00:00:00.001 Started by upstream project "autotest-per-patch" build number 124192 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.082 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.082 The recommended git tool is: git 00:00:00.082 using credential 00000000-0000-0000-0000-000000000002 00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.108 Fetching changes from the remote Git repository 00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.139 Using shallow fetch with depth 1 00:00:00.139 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.139 > git --version # timeout=10 00:00:00.170 > git --version # 'git version 2.39.2' 00:00:00.170 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.205 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.205 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.746 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.759 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.772 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:03.772 > git config core.sparsecheckout # timeout=10 00:00:03.785 > git read-tree -mu HEAD # timeout=10 00:00:03.802 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:03.820 Commit message: "pool: fixes for VisualBuild class" 00:00:03.820 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:03.901 [Pipeline] Start of Pipeline 00:00:03.915 [Pipeline] library 00:00:03.917 Loading library shm_lib@master 00:00:07.501 Library shm_lib@master is cached. Copying from home. 00:00:07.530 [Pipeline] node 00:00:07.582 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.587 [Pipeline] { 00:00:07.597 [Pipeline] catchError 00:00:07.599 [Pipeline] { 00:00:07.608 [Pipeline] wrap 00:00:07.618 [Pipeline] { 00:00:07.625 [Pipeline] stage 00:00:07.628 [Pipeline] { (Prologue) 00:00:07.657 [Pipeline] echo 00:00:07.661 Node: VM-host-SM17 00:00:07.669 [Pipeline] cleanWs 00:00:07.677 [WS-CLEANUP] Deleting project workspace... 00:00:07.677 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.684 [WS-CLEANUP] done 00:00:07.839 [Pipeline] setCustomBuildProperty 00:00:07.885 [Pipeline] nodesByLabel 00:00:07.886 Found a total of 2 nodes with the 'sorcerer' label 00:00:07.894 [Pipeline] httpRequest 00:00:07.898 HttpMethod: GET 00:00:07.898 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.903 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.911 Response Code: HTTP/1.1 200 OK 00:00:07.912 Success: Status code 200 is in the accepted range: 200,404 00:00:07.913 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:10.171 [Pipeline] sh 00:00:10.449 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:10.462 [Pipeline] httpRequest 00:00:10.465 HttpMethod: GET 00:00:10.465 URL: http://10.211.164.101/packages/spdk_3a44739b7d3100784f7efecc8e3eb1995fd1f244.tar.gz 00:00:10.466 Sending request to url: http://10.211.164.101/packages/spdk_3a44739b7d3100784f7efecc8e3eb1995fd1f244.tar.gz 00:00:10.479 Response Code: HTTP/1.1 200 OK 00:00:10.479 Success: Status code 200 is in the accepted range: 200,404 00:00:10.480 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_3a44739b7d3100784f7efecc8e3eb1995fd1f244.tar.gz 00:01:09.908 [Pipeline] sh 00:01:10.186 + tar --no-same-owner -xf spdk_3a44739b7d3100784f7efecc8e3eb1995fd1f244.tar.gz 00:01:13.525 [Pipeline] sh 00:01:13.806 + git -C spdk log --oneline -n5 00:01:13.806 3a44739b7 nvmf/tcp: move await_req handling to nvmf_tcp_req_put() 00:01:13.806 be02286f6 nvmf: move register nvmf_poll_group_poll interrupt to nvmf 00:01:13.806 9b5203592 nvmf/tcp: replace pending_buf_queue with iobuf callbacks 00:01:13.806 d216ec301 nvmf: extend API to request buffer with iobuf callback 00:01:13.806 9a8d8bdaa nvmf/tcp: use sock group polling for the listening sockets 00:01:13.825 [Pipeline] writeFile 00:01:13.841 [Pipeline] sh 00:01:14.126 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:14.139 [Pipeline] sh 00:01:14.419 + cat autorun-spdk.conf 00:01:14.419 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.419 SPDK_TEST_NVMF=1 00:01:14.419 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.419 SPDK_TEST_URING=1 00:01:14.419 SPDK_TEST_USDT=1 00:01:14.419 SPDK_RUN_UBSAN=1 00:01:14.419 NET_TYPE=virt 00:01:14.419 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.426 RUN_NIGHTLY=0 00:01:14.428 [Pipeline] } 00:01:14.445 [Pipeline] // stage 00:01:14.461 [Pipeline] stage 00:01:14.463 [Pipeline] { (Run VM) 00:01:14.477 [Pipeline] sh 00:01:14.759 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:14.759 + echo 'Start stage prepare_nvme.sh' 00:01:14.759 Start stage prepare_nvme.sh 00:01:14.759 + [[ -n 3 ]] 00:01:14.759 + disk_prefix=ex3 00:01:14.759 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:14.759 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:14.759 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:14.759 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.759 ++ SPDK_TEST_NVMF=1 00:01:14.759 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.759 ++ SPDK_TEST_URING=1 00:01:14.759 ++ SPDK_TEST_USDT=1 00:01:14.759 ++ SPDK_RUN_UBSAN=1 00:01:14.759 ++ NET_TYPE=virt 00:01:14.759 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.759 ++ RUN_NIGHTLY=0 00:01:14.759 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:14.759 + nvme_files=() 
00:01:14.759 + declare -A nvme_files 00:01:14.759 + backend_dir=/var/lib/libvirt/images/backends 00:01:14.759 + nvme_files['nvme.img']=5G 00:01:14.759 + nvme_files['nvme-cmb.img']=5G 00:01:14.759 + nvme_files['nvme-multi0.img']=4G 00:01:14.759 + nvme_files['nvme-multi1.img']=4G 00:01:14.759 + nvme_files['nvme-multi2.img']=4G 00:01:14.759 + nvme_files['nvme-openstack.img']=8G 00:01:14.759 + nvme_files['nvme-zns.img']=5G 00:01:14.759 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:14.759 + (( SPDK_TEST_FTL == 1 )) 00:01:14.759 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:14.759 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:14.759 + for nvme in "${!nvme_files[@]}" 00:01:14.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:14.759 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.759 + for nvme in "${!nvme_files[@]}" 00:01:14.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:14.759 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.759 + for nvme in "${!nvme_files[@]}" 00:01:14.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:14.759 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:14.759 + for nvme in "${!nvme_files[@]}" 00:01:14.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:14.759 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.759 + for nvme in "${!nvme_files[@]}" 00:01:14.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:14.759 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.759 + for nvme in "${!nvme_files[@]}" 00:01:14.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:14.759 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.759 + for nvme in "${!nvme_files[@]}" 00:01:14.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:15.018 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.018 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:15.018 + echo 'End stage prepare_nvme.sh' 00:01:15.018 End stage prepare_nvme.sh 00:01:15.030 [Pipeline] sh 00:01:15.312 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:15.312 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:01:15.312 00:01:15.312 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:15.312 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:15.312 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:15.312 HELP=0 00:01:15.312 DRY_RUN=0 00:01:15.312 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:15.312 NVME_DISKS_TYPE=nvme,nvme, 00:01:15.312 NVME_AUTO_CREATE=0 00:01:15.312 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:15.312 NVME_CMB=,, 00:01:15.312 NVME_PMR=,, 00:01:15.312 NVME_ZNS=,, 00:01:15.312 NVME_MS=,, 00:01:15.312 NVME_FDP=,, 00:01:15.312 SPDK_VAGRANT_DISTRO=fedora38 00:01:15.312 SPDK_VAGRANT_VMCPU=10 00:01:15.312 SPDK_VAGRANT_VMRAM=12288 00:01:15.312 SPDK_VAGRANT_PROVIDER=libvirt 00:01:15.312 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:15.312 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:15.312 SPDK_OPENSTACK_NETWORK=0 00:01:15.312 VAGRANT_PACKAGE_BOX=0 00:01:15.312 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:15.312 FORCE_DISTRO=true 00:01:15.312 VAGRANT_BOX_VERSION= 00:01:15.312 EXTRA_VAGRANTFILES= 00:01:15.312 NIC_MODEL=e1000 00:01:15.312 00:01:15.312 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:15.312 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:18.600 Bringing machine 'default' up with 'libvirt' provider... 00:01:19.166 ==> default: Creating image (snapshot of base box volume). 00:01:19.166 ==> default: Creating domain with the following settings... 00:01:19.166 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1718006200_75416d3649f916eb4128 00:01:19.166 ==> default: -- Domain type: kvm 00:01:19.166 ==> default: -- Cpus: 10 00:01:19.166 ==> default: -- Feature: acpi 00:01:19.166 ==> default: -- Feature: apic 00:01:19.166 ==> default: -- Feature: pae 00:01:19.166 ==> default: -- Memory: 12288M 00:01:19.166 ==> default: -- Memory Backing: hugepages: 00:01:19.166 ==> default: -- Management MAC: 00:01:19.166 ==> default: -- Loader: 00:01:19.166 ==> default: -- Nvram: 00:01:19.166 ==> default: -- Base box: spdk/fedora38 00:01:19.166 ==> default: -- Storage pool: default 00:01:19.166 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1718006200_75416d3649f916eb4128.img (20G) 00:01:19.166 ==> default: -- Volume Cache: default 00:01:19.166 ==> default: -- Kernel: 00:01:19.166 ==> default: -- Initrd: 00:01:19.166 ==> default: -- Graphics Type: vnc 00:01:19.166 ==> default: -- Graphics Port: -1 00:01:19.166 ==> default: -- Graphics IP: 127.0.0.1 00:01:19.166 ==> default: -- Graphics Password: Not defined 00:01:19.166 ==> default: -- Video Type: cirrus 00:01:19.166 ==> default: -- Video VRAM: 9216 00:01:19.166 ==> default: -- Sound Type: 00:01:19.166 ==> default: -- Keymap: en-us 00:01:19.166 ==> default: -- TPM Path: 00:01:19.166 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:19.166 ==> default: -- Command line args: 00:01:19.166 ==> default: -> value=-device, 00:01:19.166 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:19.166 ==> default: -> value=-drive, 00:01:19.166 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:19.166 ==> default: -> value=-device, 00:01:19.166 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.166 ==> default: -> value=-device, 00:01:19.166 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:19.166 ==> default: -> value=-drive, 00:01:19.166 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:19.166 ==> default: -> value=-device, 00:01:19.166 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.166 ==> default: -> value=-drive, 00:01:19.166 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:19.166 ==> default: -> value=-device, 00:01:19.166 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.166 ==> default: -> value=-drive, 00:01:19.166 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:19.166 ==> default: -> value=-device, 00:01:19.166 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.425 ==> default: Creating shared folders metadata... 00:01:19.425 ==> default: Starting domain. 00:01:21.323 ==> default: Waiting for domain to get an IP address... 00:01:36.227 ==> default: Waiting for SSH to become available... 00:01:37.601 ==> default: Configuring and enabling network interfaces... 00:01:41.828 default: SSH address: 192.168.121.177:22 00:01:41.828 default: SSH username: vagrant 00:01:41.828 default: SSH auth method: private key 00:01:43.204 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:51.318 ==> default: Mounting SSHFS shared folder... 00:01:52.251 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:52.251 ==> default: Checking Mount.. 00:01:53.626 ==> default: Folder Successfully Mounted! 00:01:53.626 ==> default: Running provisioner: file... 00:01:54.193 default: ~/.gitconfig => .gitconfig 00:01:54.452 00:01:54.452 SUCCESS! 00:01:54.452 00:01:54.452 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:54.452 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:54.452 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
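The VM bring-up above attaches the raw backing files created in prepare_nvme.sh as emulated NVMe controllers. Below is a minimal sketch of the NVMe-related portion of the equivalent QEMU invocation, reconstructed only from the -drive/-device arguments printed in the log; the QEMU binary path, the ex3 image prefix, and the backends directory are specific to this run, and everything the log does not show (machine type, memory, network, the libvirt wrapping itself) is deliberately omitted rather than guessed.

#!/usr/bin/env bash
# Sketch only: the NVMe controller/namespace wiring generated by
# vagrant_create_vm.sh in this run. Requires the backing images to exist.
QEMU=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
BACKENDS=/var/lib/libvirt/images/backends

# One of the backing files could be recreated roughly like the
# "Formatting ... fmt=raw ... preallocation=falloc" lines above, e.g.:
#   qemu-img create -f raw -o preallocation=falloc "$BACKENDS/ex3-nvme.img" 5G

"$QEMU" \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file="$BACKENDS/ex3-nvme.img",if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file="$BACKENDS/ex3-nvme-multi0.img",if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file="$BACKENDS/ex3-nvme-multi1.img",if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file="$BACKENDS/ex3-nvme-multi2.img",if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

The second controller (serial 12341) carries three namespaces backed by the ex3-nvme-multi*.img files, which is why the guest later enumerates nvme1n1, nvme1n2 and nvme1n3 in setup.sh status.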
00:01:54.452 00:01:54.461 [Pipeline] } 00:01:54.479 [Pipeline] // stage 00:01:54.488 [Pipeline] dir 00:01:54.489 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:54.490 [Pipeline] { 00:01:54.504 [Pipeline] catchError 00:01:54.506 [Pipeline] { 00:01:54.520 [Pipeline] sh 00:01:54.800 + vagrant ssh-config --host vagrant 00:01:54.800 + sed -ne /^Host/,$p 00:01:54.800 + tee ssh_conf 00:01:58.084 Host vagrant 00:01:58.084 HostName 192.168.121.177 00:01:58.084 User vagrant 00:01:58.084 Port 22 00:01:58.084 UserKnownHostsFile /dev/null 00:01:58.084 StrictHostKeyChecking no 00:01:58.084 PasswordAuthentication no 00:01:58.084 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:58.084 IdentitiesOnly yes 00:01:58.084 LogLevel FATAL 00:01:58.084 ForwardAgent yes 00:01:58.084 ForwardX11 yes 00:01:58.084 00:01:58.097 [Pipeline] withEnv 00:01:58.100 [Pipeline] { 00:01:58.118 [Pipeline] sh 00:01:58.396 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:58.396 source /etc/os-release 00:01:58.396 [[ -e /image.version ]] && img=$(< /image.version) 00:01:58.396 # Minimal, systemd-like check. 00:01:58.396 if [[ -e /.dockerenv ]]; then 00:01:58.396 # Clear garbage from the node's name: 00:01:58.396 # agt-er_autotest_547-896 -> autotest_547-896 00:01:58.396 # $HOSTNAME is the actual container id 00:01:58.396 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:58.396 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:58.396 # We can assume this is a mount from a host where container is running, 00:01:58.396 # so fetch its hostname to easily identify the target swarm worker. 00:01:58.396 container="$(< /etc/hostname) ($agent)" 00:01:58.396 else 00:01:58.396 # Fallback 00:01:58.396 container=$agent 00:01:58.396 fi 00:01:58.396 fi 00:01:58.396 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:58.396 00:01:58.404 [Pipeline] } 00:01:58.413 [Pipeline] // withEnv 00:01:58.419 [Pipeline] setCustomBuildProperty 00:01:58.427 [Pipeline] stage 00:01:58.429 [Pipeline] { (Tests) 00:01:58.440 [Pipeline] sh 00:01:58.712 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:58.726 [Pipeline] sh 00:01:59.003 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:59.277 [Pipeline] timeout 00:01:59.277 Timeout set to expire in 30 min 00:01:59.279 [Pipeline] { 00:01:59.295 [Pipeline] sh 00:01:59.578 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:00.144 HEAD is now at 3a44739b7 nvmf/tcp: move await_req handling to nvmf_tcp_req_put() 00:02:00.158 [Pipeline] sh 00:02:00.436 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:00.706 [Pipeline] sh 00:02:00.999 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:01.016 [Pipeline] sh 00:02:01.296 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:01.554 ++ readlink -f spdk_repo 00:02:01.554 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:01.554 + [[ -n /home/vagrant/spdk_repo ]] 00:02:01.554 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:01.554 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 
00:02:01.554 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:01.554 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:01.554 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:01.554 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:01.554 + cd /home/vagrant/spdk_repo 00:02:01.554 + source /etc/os-release 00:02:01.554 ++ NAME='Fedora Linux' 00:02:01.554 ++ VERSION='38 (Cloud Edition)' 00:02:01.554 ++ ID=fedora 00:02:01.554 ++ VERSION_ID=38 00:02:01.554 ++ VERSION_CODENAME= 00:02:01.554 ++ PLATFORM_ID=platform:f38 00:02:01.554 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:01.555 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:01.555 ++ LOGO=fedora-logo-icon 00:02:01.555 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:01.555 ++ HOME_URL=https://fedoraproject.org/ 00:02:01.555 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:01.555 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:01.555 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:01.555 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:01.555 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:01.555 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:01.555 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:01.555 ++ SUPPORT_END=2024-05-14 00:02:01.555 ++ VARIANT='Cloud Edition' 00:02:01.555 ++ VARIANT_ID=cloud 00:02:01.555 + uname -a 00:02:01.555 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:01.555 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:01.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:01.812 Hugepages 00:02:01.812 node hugesize free / total 00:02:02.071 node0 1048576kB 0 / 0 00:02:02.071 node0 2048kB 0 / 0 00:02:02.071 00:02:02.071 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:02.071 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:02.071 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:02.071 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:02.071 + rm -f /tmp/spdk-ld-path 00:02:02.071 + source autorun-spdk.conf 00:02:02.071 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.071 ++ SPDK_TEST_NVMF=1 00:02:02.071 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.071 ++ SPDK_TEST_URING=1 00:02:02.071 ++ SPDK_TEST_USDT=1 00:02:02.071 ++ SPDK_RUN_UBSAN=1 00:02:02.071 ++ NET_TYPE=virt 00:02:02.071 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.071 ++ RUN_NIGHTLY=0 00:02:02.071 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:02.071 + [[ -n '' ]] 00:02:02.071 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:02.071 + for M in /var/spdk/build-*-manifest.txt 00:02:02.071 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:02.071 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.071 + for M in /var/spdk/build-*-manifest.txt 00:02:02.071 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:02.071 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.071 ++ uname 00:02:02.071 + [[ Linux == \L\i\n\u\x ]] 00:02:02.071 + sudo dmesg -T 00:02:02.071 + sudo dmesg --clear 00:02:02.071 + dmesg_pid=5104 00:02:02.071 + [[ Fedora Linux == FreeBSD ]] 00:02:02.071 + sudo dmesg -Tw 00:02:02.071 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.071 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.071 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:02.071 + [[ -x 
/usr/src/fio-static/fio ]] 00:02:02.071 + export FIO_BIN=/usr/src/fio-static/fio 00:02:02.071 + FIO_BIN=/usr/src/fio-static/fio 00:02:02.071 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:02.071 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:02.071 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:02.071 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.071 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.071 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:02.071 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.071 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.071 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:02.071 Test configuration: 00:02:02.071 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.071 SPDK_TEST_NVMF=1 00:02:02.071 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.071 SPDK_TEST_URING=1 00:02:02.071 SPDK_TEST_USDT=1 00:02:02.071 SPDK_RUN_UBSAN=1 00:02:02.071 NET_TYPE=virt 00:02:02.071 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.329 RUN_NIGHTLY=0 07:57:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:02.329 07:57:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:02.329 07:57:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:02.329 07:57:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:02.329 07:57:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.329 07:57:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.329 07:57:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.329 07:57:23 -- paths/export.sh@5 -- $ export PATH 00:02:02.329 07:57:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.329 07:57:23 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:02.329 07:57:23 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:02.330 07:57:23 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718006243.XXXXXX 00:02:02.330 07:57:23 -- common/autobuild_common.sh@437 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1718006243.0x9UXf 00:02:02.330 07:57:23 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:02.330 07:57:23 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:02:02.330 07:57:23 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:02.330 07:57:23 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:02.330 07:57:23 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:02.330 07:57:23 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:02.330 07:57:23 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:02.330 07:57:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.330 07:57:24 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:02.330 07:57:24 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:02.330 07:57:24 -- pm/common@17 -- $ local monitor 00:02:02.330 07:57:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.330 07:57:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.330 07:57:24 -- pm/common@25 -- $ sleep 1 00:02:02.330 07:57:24 -- pm/common@21 -- $ date +%s 00:02:02.330 07:57:24 -- pm/common@21 -- $ date +%s 00:02:02.330 07:57:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718006244 00:02:02.330 07:57:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718006244 00:02:02.330 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718006244_collect-vmstat.pm.log 00:02:02.330 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718006244_collect-cpu-load.pm.log 00:02:03.265 07:57:25 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:03.265 07:57:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:03.265 07:57:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:03.265 07:57:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:03.265 07:57:25 -- spdk/autobuild.sh@16 -- $ date -u 00:02:03.265 Mon Jun 10 07:57:25 AM UTC 2024 00:02:03.265 07:57:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:03.265 v24.09-pre-62-g3a44739b7 00:02:03.265 07:57:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:03.265 07:57:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:03.265 07:57:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:03.265 07:57:25 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:02:03.265 07:57:25 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:02:03.265 07:57:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.265 ************************************ 00:02:03.265 START TEST ubsan 00:02:03.265 ************************************ 00:02:03.265 using ubsan 00:02:03.265 07:57:25 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:02:03.265 00:02:03.265 real 0m0.000s 
00:02:03.265 user 0m0.000s 00:02:03.265 sys 0m0.000s 00:02:03.265 07:57:25 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:03.265 ************************************ 00:02:03.265 END TEST ubsan 00:02:03.265 ************************************ 00:02:03.265 07:57:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:03.265 07:57:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:03.265 07:57:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:03.265 07:57:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:03.265 07:57:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:03.265 07:57:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:03.265 07:57:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:03.265 07:57:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:03.265 07:57:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:03.265 07:57:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:03.523 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:03.523 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:03.781 Using 'verbs' RDMA provider 00:02:19.743 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:31.944 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:31.944 Creating mk/config.mk...done. 00:02:31.944 Creating mk/cc.flags.mk...done. 00:02:31.944 Type 'make' to build. 00:02:31.944 07:57:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:31.944 07:57:53 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:02:31.944 07:57:53 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:02:31.944 07:57:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.944 ************************************ 00:02:31.944 START TEST make 00:02:31.944 ************************************ 00:02:31.944 07:57:53 make -- common/autotest_common.sh@1124 -- $ make -j10 00:02:31.944 make[1]: Nothing to be done for 'all'. 
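For reference, the build whose DPDK/Meson output follows reduces to the configure and make invocations captured above. A minimal sketch of repeating it by hand inside the VM, assuming the repository is already checked out at /home/vagrant/spdk_repo/spdk and fio sources are present at /usr/src/fio as on this image (the flag list is copied verbatim from config_params above; adjust paths for other environments):

#!/usr/bin/env bash
set -euo pipefail
# Sketch: re-run the configuration and build captured in this log.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt \
    --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage \
    --with-ublk --with-uring --with-shared
make -j10   # -j10 matches SPDK_VAGRANT_VMCPU=10 used for this VM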
00:02:44.141 The Meson build system 00:02:44.141 Version: 1.3.1 00:02:44.141 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:44.141 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:44.141 Build type: native build 00:02:44.141 Program cat found: YES (/usr/bin/cat) 00:02:44.141 Project name: DPDK 00:02:44.141 Project version: 24.03.0 00:02:44.141 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:44.141 C linker for the host machine: cc ld.bfd 2.39-16 00:02:44.141 Host machine cpu family: x86_64 00:02:44.141 Host machine cpu: x86_64 00:02:44.141 Message: ## Building in Developer Mode ## 00:02:44.141 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:44.141 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:44.141 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:44.141 Program python3 found: YES (/usr/bin/python3) 00:02:44.141 Program cat found: YES (/usr/bin/cat) 00:02:44.141 Compiler for C supports arguments -march=native: YES 00:02:44.141 Checking for size of "void *" : 8 00:02:44.141 Checking for size of "void *" : 8 (cached) 00:02:44.141 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:44.141 Library m found: YES 00:02:44.141 Library numa found: YES 00:02:44.141 Has header "numaif.h" : YES 00:02:44.141 Library fdt found: NO 00:02:44.141 Library execinfo found: NO 00:02:44.141 Has header "execinfo.h" : YES 00:02:44.141 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:44.141 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:44.141 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:44.141 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:44.141 Run-time dependency openssl found: YES 3.0.9 00:02:44.141 Run-time dependency libpcap found: YES 1.10.4 00:02:44.141 Has header "pcap.h" with dependency libpcap: YES 00:02:44.141 Compiler for C supports arguments -Wcast-qual: YES 00:02:44.141 Compiler for C supports arguments -Wdeprecated: YES 00:02:44.141 Compiler for C supports arguments -Wformat: YES 00:02:44.141 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:44.141 Compiler for C supports arguments -Wformat-security: NO 00:02:44.141 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:44.141 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:44.141 Compiler for C supports arguments -Wnested-externs: YES 00:02:44.141 Compiler for C supports arguments -Wold-style-definition: YES 00:02:44.141 Compiler for C supports arguments -Wpointer-arith: YES 00:02:44.141 Compiler for C supports arguments -Wsign-compare: YES 00:02:44.141 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:44.141 Compiler for C supports arguments -Wundef: YES 00:02:44.141 Compiler for C supports arguments -Wwrite-strings: YES 00:02:44.141 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:44.141 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:44.141 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:44.141 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:44.141 Program objdump found: YES (/usr/bin/objdump) 00:02:44.141 Compiler for C supports arguments -mavx512f: YES 00:02:44.141 Checking if "AVX512 checking" compiles: YES 00:02:44.141 Fetching value of define "__SSE4_2__" : 1 00:02:44.141 Fetching value of define 
"__AES__" : 1 00:02:44.141 Fetching value of define "__AVX__" : 1 00:02:44.141 Fetching value of define "__AVX2__" : 1 00:02:44.141 Fetching value of define "__AVX512BW__" : (undefined) 00:02:44.141 Fetching value of define "__AVX512CD__" : (undefined) 00:02:44.141 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:44.141 Fetching value of define "__AVX512F__" : (undefined) 00:02:44.141 Fetching value of define "__AVX512VL__" : (undefined) 00:02:44.141 Fetching value of define "__PCLMUL__" : 1 00:02:44.141 Fetching value of define "__RDRND__" : 1 00:02:44.141 Fetching value of define "__RDSEED__" : 1 00:02:44.141 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:44.141 Fetching value of define "__znver1__" : (undefined) 00:02:44.141 Fetching value of define "__znver2__" : (undefined) 00:02:44.141 Fetching value of define "__znver3__" : (undefined) 00:02:44.141 Fetching value of define "__znver4__" : (undefined) 00:02:44.141 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:44.141 Message: lib/log: Defining dependency "log" 00:02:44.141 Message: lib/kvargs: Defining dependency "kvargs" 00:02:44.141 Message: lib/telemetry: Defining dependency "telemetry" 00:02:44.141 Checking for function "getentropy" : NO 00:02:44.141 Message: lib/eal: Defining dependency "eal" 00:02:44.141 Message: lib/ring: Defining dependency "ring" 00:02:44.141 Message: lib/rcu: Defining dependency "rcu" 00:02:44.141 Message: lib/mempool: Defining dependency "mempool" 00:02:44.141 Message: lib/mbuf: Defining dependency "mbuf" 00:02:44.141 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:44.141 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:44.141 Compiler for C supports arguments -mpclmul: YES 00:02:44.141 Compiler for C supports arguments -maes: YES 00:02:44.141 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:44.141 Compiler for C supports arguments -mavx512bw: YES 00:02:44.141 Compiler for C supports arguments -mavx512dq: YES 00:02:44.141 Compiler for C supports arguments -mavx512vl: YES 00:02:44.141 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:44.141 Compiler for C supports arguments -mavx2: YES 00:02:44.141 Compiler for C supports arguments -mavx: YES 00:02:44.141 Message: lib/net: Defining dependency "net" 00:02:44.141 Message: lib/meter: Defining dependency "meter" 00:02:44.141 Message: lib/ethdev: Defining dependency "ethdev" 00:02:44.141 Message: lib/pci: Defining dependency "pci" 00:02:44.141 Message: lib/cmdline: Defining dependency "cmdline" 00:02:44.141 Message: lib/hash: Defining dependency "hash" 00:02:44.141 Message: lib/timer: Defining dependency "timer" 00:02:44.141 Message: lib/compressdev: Defining dependency "compressdev" 00:02:44.141 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:44.141 Message: lib/dmadev: Defining dependency "dmadev" 00:02:44.141 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:44.141 Message: lib/power: Defining dependency "power" 00:02:44.141 Message: lib/reorder: Defining dependency "reorder" 00:02:44.141 Message: lib/security: Defining dependency "security" 00:02:44.141 Has header "linux/userfaultfd.h" : YES 00:02:44.141 Has header "linux/vduse.h" : YES 00:02:44.141 Message: lib/vhost: Defining dependency "vhost" 00:02:44.141 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:44.141 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:44.141 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:44.141 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:44.141 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:44.141 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:44.141 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:44.141 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:44.141 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:44.141 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:44.141 Program doxygen found: YES (/usr/bin/doxygen) 00:02:44.141 Configuring doxy-api-html.conf using configuration 00:02:44.141 Configuring doxy-api-man.conf using configuration 00:02:44.141 Program mandb found: YES (/usr/bin/mandb) 00:02:44.141 Program sphinx-build found: NO 00:02:44.141 Configuring rte_build_config.h using configuration 00:02:44.141 Message: 00:02:44.142 ================= 00:02:44.142 Applications Enabled 00:02:44.142 ================= 00:02:44.142 00:02:44.142 apps: 00:02:44.142 00:02:44.142 00:02:44.142 Message: 00:02:44.142 ================= 00:02:44.142 Libraries Enabled 00:02:44.142 ================= 00:02:44.142 00:02:44.142 libs: 00:02:44.142 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:44.142 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:44.142 cryptodev, dmadev, power, reorder, security, vhost, 00:02:44.142 00:02:44.142 Message: 00:02:44.142 =============== 00:02:44.142 Drivers Enabled 00:02:44.142 =============== 00:02:44.142 00:02:44.142 common: 00:02:44.142 00:02:44.142 bus: 00:02:44.142 pci, vdev, 00:02:44.142 mempool: 00:02:44.142 ring, 00:02:44.142 dma: 00:02:44.142 00:02:44.142 net: 00:02:44.142 00:02:44.142 crypto: 00:02:44.142 00:02:44.142 compress: 00:02:44.142 00:02:44.142 vdpa: 00:02:44.142 00:02:44.142 00:02:44.142 Message: 00:02:44.142 ================= 00:02:44.142 Content Skipped 00:02:44.142 ================= 00:02:44.142 00:02:44.142 apps: 00:02:44.142 dumpcap: explicitly disabled via build config 00:02:44.142 graph: explicitly disabled via build config 00:02:44.142 pdump: explicitly disabled via build config 00:02:44.142 proc-info: explicitly disabled via build config 00:02:44.142 test-acl: explicitly disabled via build config 00:02:44.142 test-bbdev: explicitly disabled via build config 00:02:44.142 test-cmdline: explicitly disabled via build config 00:02:44.142 test-compress-perf: explicitly disabled via build config 00:02:44.142 test-crypto-perf: explicitly disabled via build config 00:02:44.142 test-dma-perf: explicitly disabled via build config 00:02:44.142 test-eventdev: explicitly disabled via build config 00:02:44.142 test-fib: explicitly disabled via build config 00:02:44.142 test-flow-perf: explicitly disabled via build config 00:02:44.142 test-gpudev: explicitly disabled via build config 00:02:44.142 test-mldev: explicitly disabled via build config 00:02:44.142 test-pipeline: explicitly disabled via build config 00:02:44.142 test-pmd: explicitly disabled via build config 00:02:44.142 test-regex: explicitly disabled via build config 00:02:44.142 test-sad: explicitly disabled via build config 00:02:44.142 test-security-perf: explicitly disabled via build config 00:02:44.142 00:02:44.142 libs: 00:02:44.142 argparse: explicitly disabled via build config 00:02:44.142 metrics: explicitly disabled via build config 00:02:44.142 acl: explicitly disabled via build config 00:02:44.142 bbdev: explicitly disabled via build config 00:02:44.142 
bitratestats: explicitly disabled via build config 00:02:44.142 bpf: explicitly disabled via build config 00:02:44.142 cfgfile: explicitly disabled via build config 00:02:44.142 distributor: explicitly disabled via build config 00:02:44.142 efd: explicitly disabled via build config 00:02:44.142 eventdev: explicitly disabled via build config 00:02:44.142 dispatcher: explicitly disabled via build config 00:02:44.142 gpudev: explicitly disabled via build config 00:02:44.142 gro: explicitly disabled via build config 00:02:44.142 gso: explicitly disabled via build config 00:02:44.142 ip_frag: explicitly disabled via build config 00:02:44.142 jobstats: explicitly disabled via build config 00:02:44.142 latencystats: explicitly disabled via build config 00:02:44.142 lpm: explicitly disabled via build config 00:02:44.142 member: explicitly disabled via build config 00:02:44.142 pcapng: explicitly disabled via build config 00:02:44.142 rawdev: explicitly disabled via build config 00:02:44.142 regexdev: explicitly disabled via build config 00:02:44.142 mldev: explicitly disabled via build config 00:02:44.142 rib: explicitly disabled via build config 00:02:44.142 sched: explicitly disabled via build config 00:02:44.142 stack: explicitly disabled via build config 00:02:44.142 ipsec: explicitly disabled via build config 00:02:44.142 pdcp: explicitly disabled via build config 00:02:44.142 fib: explicitly disabled via build config 00:02:44.142 port: explicitly disabled via build config 00:02:44.142 pdump: explicitly disabled via build config 00:02:44.142 table: explicitly disabled via build config 00:02:44.142 pipeline: explicitly disabled via build config 00:02:44.142 graph: explicitly disabled via build config 00:02:44.142 node: explicitly disabled via build config 00:02:44.142 00:02:44.142 drivers: 00:02:44.142 common/cpt: not in enabled drivers build config 00:02:44.142 common/dpaax: not in enabled drivers build config 00:02:44.142 common/iavf: not in enabled drivers build config 00:02:44.142 common/idpf: not in enabled drivers build config 00:02:44.142 common/ionic: not in enabled drivers build config 00:02:44.142 common/mvep: not in enabled drivers build config 00:02:44.142 common/octeontx: not in enabled drivers build config 00:02:44.142 bus/auxiliary: not in enabled drivers build config 00:02:44.142 bus/cdx: not in enabled drivers build config 00:02:44.142 bus/dpaa: not in enabled drivers build config 00:02:44.142 bus/fslmc: not in enabled drivers build config 00:02:44.142 bus/ifpga: not in enabled drivers build config 00:02:44.142 bus/platform: not in enabled drivers build config 00:02:44.142 bus/uacce: not in enabled drivers build config 00:02:44.142 bus/vmbus: not in enabled drivers build config 00:02:44.142 common/cnxk: not in enabled drivers build config 00:02:44.142 common/mlx5: not in enabled drivers build config 00:02:44.142 common/nfp: not in enabled drivers build config 00:02:44.142 common/nitrox: not in enabled drivers build config 00:02:44.142 common/qat: not in enabled drivers build config 00:02:44.142 common/sfc_efx: not in enabled drivers build config 00:02:44.142 mempool/bucket: not in enabled drivers build config 00:02:44.142 mempool/cnxk: not in enabled drivers build config 00:02:44.142 mempool/dpaa: not in enabled drivers build config 00:02:44.142 mempool/dpaa2: not in enabled drivers build config 00:02:44.142 mempool/octeontx: not in enabled drivers build config 00:02:44.142 mempool/stack: not in enabled drivers build config 00:02:44.142 dma/cnxk: not in enabled drivers build 
config 00:02:44.142 dma/dpaa: not in enabled drivers build config 00:02:44.142 dma/dpaa2: not in enabled drivers build config 00:02:44.142 dma/hisilicon: not in enabled drivers build config 00:02:44.142 dma/idxd: not in enabled drivers build config 00:02:44.142 dma/ioat: not in enabled drivers build config 00:02:44.142 dma/skeleton: not in enabled drivers build config 00:02:44.142 net/af_packet: not in enabled drivers build config 00:02:44.142 net/af_xdp: not in enabled drivers build config 00:02:44.142 net/ark: not in enabled drivers build config 00:02:44.142 net/atlantic: not in enabled drivers build config 00:02:44.142 net/avp: not in enabled drivers build config 00:02:44.142 net/axgbe: not in enabled drivers build config 00:02:44.142 net/bnx2x: not in enabled drivers build config 00:02:44.142 net/bnxt: not in enabled drivers build config 00:02:44.142 net/bonding: not in enabled drivers build config 00:02:44.142 net/cnxk: not in enabled drivers build config 00:02:44.142 net/cpfl: not in enabled drivers build config 00:02:44.142 net/cxgbe: not in enabled drivers build config 00:02:44.142 net/dpaa: not in enabled drivers build config 00:02:44.142 net/dpaa2: not in enabled drivers build config 00:02:44.142 net/e1000: not in enabled drivers build config 00:02:44.142 net/ena: not in enabled drivers build config 00:02:44.142 net/enetc: not in enabled drivers build config 00:02:44.142 net/enetfec: not in enabled drivers build config 00:02:44.142 net/enic: not in enabled drivers build config 00:02:44.142 net/failsafe: not in enabled drivers build config 00:02:44.142 net/fm10k: not in enabled drivers build config 00:02:44.142 net/gve: not in enabled drivers build config 00:02:44.142 net/hinic: not in enabled drivers build config 00:02:44.142 net/hns3: not in enabled drivers build config 00:02:44.142 net/i40e: not in enabled drivers build config 00:02:44.142 net/iavf: not in enabled drivers build config 00:02:44.142 net/ice: not in enabled drivers build config 00:02:44.142 net/idpf: not in enabled drivers build config 00:02:44.142 net/igc: not in enabled drivers build config 00:02:44.142 net/ionic: not in enabled drivers build config 00:02:44.142 net/ipn3ke: not in enabled drivers build config 00:02:44.142 net/ixgbe: not in enabled drivers build config 00:02:44.142 net/mana: not in enabled drivers build config 00:02:44.142 net/memif: not in enabled drivers build config 00:02:44.142 net/mlx4: not in enabled drivers build config 00:02:44.142 net/mlx5: not in enabled drivers build config 00:02:44.142 net/mvneta: not in enabled drivers build config 00:02:44.142 net/mvpp2: not in enabled drivers build config 00:02:44.142 net/netvsc: not in enabled drivers build config 00:02:44.142 net/nfb: not in enabled drivers build config 00:02:44.142 net/nfp: not in enabled drivers build config 00:02:44.142 net/ngbe: not in enabled drivers build config 00:02:44.142 net/null: not in enabled drivers build config 00:02:44.142 net/octeontx: not in enabled drivers build config 00:02:44.142 net/octeon_ep: not in enabled drivers build config 00:02:44.142 net/pcap: not in enabled drivers build config 00:02:44.142 net/pfe: not in enabled drivers build config 00:02:44.142 net/qede: not in enabled drivers build config 00:02:44.142 net/ring: not in enabled drivers build config 00:02:44.142 net/sfc: not in enabled drivers build config 00:02:44.142 net/softnic: not in enabled drivers build config 00:02:44.142 net/tap: not in enabled drivers build config 00:02:44.142 net/thunderx: not in enabled drivers build config 00:02:44.142 
net/txgbe: not in enabled drivers build config 00:02:44.142 net/vdev_netvsc: not in enabled drivers build config 00:02:44.142 net/vhost: not in enabled drivers build config 00:02:44.142 net/virtio: not in enabled drivers build config 00:02:44.142 net/vmxnet3: not in enabled drivers build config 00:02:44.142 raw/*: missing internal dependency, "rawdev" 00:02:44.142 crypto/armv8: not in enabled drivers build config 00:02:44.142 crypto/bcmfs: not in enabled drivers build config 00:02:44.142 crypto/caam_jr: not in enabled drivers build config 00:02:44.142 crypto/ccp: not in enabled drivers build config 00:02:44.142 crypto/cnxk: not in enabled drivers build config 00:02:44.142 crypto/dpaa_sec: not in enabled drivers build config 00:02:44.142 crypto/dpaa2_sec: not in enabled drivers build config 00:02:44.143 crypto/ipsec_mb: not in enabled drivers build config 00:02:44.143 crypto/mlx5: not in enabled drivers build config 00:02:44.143 crypto/mvsam: not in enabled drivers build config 00:02:44.143 crypto/nitrox: not in enabled drivers build config 00:02:44.143 crypto/null: not in enabled drivers build config 00:02:44.143 crypto/octeontx: not in enabled drivers build config 00:02:44.143 crypto/openssl: not in enabled drivers build config 00:02:44.143 crypto/scheduler: not in enabled drivers build config 00:02:44.143 crypto/uadk: not in enabled drivers build config 00:02:44.143 crypto/virtio: not in enabled drivers build config 00:02:44.143 compress/isal: not in enabled drivers build config 00:02:44.143 compress/mlx5: not in enabled drivers build config 00:02:44.143 compress/nitrox: not in enabled drivers build config 00:02:44.143 compress/octeontx: not in enabled drivers build config 00:02:44.143 compress/zlib: not in enabled drivers build config 00:02:44.143 regex/*: missing internal dependency, "regexdev" 00:02:44.143 ml/*: missing internal dependency, "mldev" 00:02:44.143 vdpa/ifc: not in enabled drivers build config 00:02:44.143 vdpa/mlx5: not in enabled drivers build config 00:02:44.143 vdpa/nfp: not in enabled drivers build config 00:02:44.143 vdpa/sfc: not in enabled drivers build config 00:02:44.143 event/*: missing internal dependency, "eventdev" 00:02:44.143 baseband/*: missing internal dependency, "bbdev" 00:02:44.143 gpu/*: missing internal dependency, "gpudev" 00:02:44.143 00:02:44.143 00:02:44.143 Build targets in project: 85 00:02:44.143 00:02:44.143 DPDK 24.03.0 00:02:44.143 00:02:44.143 User defined options 00:02:44.143 buildtype : debug 00:02:44.143 default_library : shared 00:02:44.143 libdir : lib 00:02:44.143 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:44.143 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:44.143 c_link_args : 00:02:44.143 cpu_instruction_set: native 00:02:44.143 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:44.143 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:44.143 enable_docs : false 00:02:44.143 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:44.143 enable_kmods : false 00:02:44.143 tests : false 00:02:44.143 00:02:44.143 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.143 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:44.143 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.143 [2/268] Linking static target lib/librte_kvargs.a 00:02:44.143 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:44.143 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:44.143 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:44.143 [6/268] Linking static target lib/librte_log.a 00:02:44.143 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.143 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.401 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:44.401 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.660 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:44.660 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:44.660 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.660 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:44.660 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.660 [16/268] Linking static target lib/librte_telemetry.a 00:02:44.660 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.660 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.660 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:44.919 [20/268] Linking target lib/librte_log.so.24.1 00:02:45.179 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:45.179 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:45.179 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:45.440 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.440 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:45.440 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:45.440 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:45.440 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:45.440 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:45.440 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.440 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:45.698 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:45.698 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:45.698 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:45.698 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:45.698 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:45.957 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:45.957 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:45.957 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:46.216 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:46.216 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:46.216 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:46.476 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:46.476 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:46.476 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:46.476 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:46.735 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:46.735 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:46.735 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:46.735 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:46.993 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:46.993 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.252 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:47.511 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:47.511 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.511 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.511 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:47.511 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:47.511 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:47.770 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:47.770 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:47.770 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:47.770 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.337 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.337 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.595 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.595 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.595 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.595 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.595 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:48.595 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.595 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:48.853 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:48.853 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.853 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.853 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:49.111 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:49.111 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:49.370 [79/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:49.370 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:49.370 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:49.628 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:49.628 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:49.628 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:49.886 [85/268] Linking static target lib/librte_eal.a 00:02:49.886 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:49.886 [87/268] Linking static target lib/librte_ring.a 00:02:50.145 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:50.145 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:50.145 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:50.145 [91/268] Linking static target lib/librte_rcu.a 00:02:50.408 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:50.408 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:50.408 [94/268] Linking static target lib/librte_mempool.a 00:02:50.408 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.668 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:50.668 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:50.668 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:50.927 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:50.927 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:50.927 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.927 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:50.927 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:51.187 [104/268] Linking static target lib/librte_mbuf.a 00:02:51.187 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:51.187 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:51.187 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:51.445 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:51.445 [109/268] Linking static target lib/librte_meter.a 00:02:51.445 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:51.445 [111/268] Linking static target lib/librte_net.a 00:02:51.703 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.704 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:51.704 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:51.961 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:51.961 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.961 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.961 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:52.219 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.478 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:52.478 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:52.738 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:52.738 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:52.738 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:52.997 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:52.997 [126/268] Linking static target lib/librte_pci.a 00:02:52.997 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:52.997 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:53.256 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:53.256 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:53.256 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.256 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:53.256 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:53.514 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:53.514 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:53.514 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:53.514 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:53.514 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:53.514 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:53.514 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:53.514 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:53.514 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:53.514 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:53.514 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:53.514 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:53.514 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:53.772 [147/268] Linking static target lib/librte_ethdev.a 00:02:54.031 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:54.031 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:54.031 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:54.031 [151/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:54.031 [152/268] Linking static target lib/librte_cmdline.a 00:02:54.289 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:54.547 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:54.547 [155/268] Linking static target lib/librte_timer.a 00:02:54.547 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:54.547 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:54.547 [158/268] Linking static target lib/librte_hash.a 00:02:54.547 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:54.547 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:54.547 
[161/268] Linking static target lib/librte_compressdev.a 00:02:54.806 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:54.806 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:55.064 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:55.064 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.322 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:55.322 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:55.322 [168/268] Linking static target lib/librte_dmadev.a 00:02:55.322 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:55.581 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:55.581 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:55.581 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:55.581 [173/268] Linking static target lib/librte_cryptodev.a 00:02:55.581 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.839 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.839 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:55.839 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.097 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:56.356 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:56.356 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.356 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:56.356 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:56.356 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:56.356 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:56.614 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:56.614 [186/268] Linking static target lib/librte_power.a 00:02:56.614 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:56.873 [188/268] Linking static target lib/librte_reorder.a 00:02:56.873 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:57.131 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:57.131 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:57.131 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:57.131 [193/268] Linking static target lib/librte_security.a 00:02:57.389 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.389 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:57.649 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.906 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.907 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:57.907 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:57.907 [200/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:57.907 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:58.165 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.165 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:58.424 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:58.424 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:58.424 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:58.682 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:58.682 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.682 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:58.682 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:58.682 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:58.682 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:58.941 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:58.941 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.941 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.941 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:58.941 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:58.941 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.941 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.941 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:58.941 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:58.941 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:59.199 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:59.199 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.199 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:59.199 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:59.199 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:59.457 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.023 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.023 [230/268] Linking static target lib/librte_vhost.a 00:03:01.398 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.398 [232/268] Linking target lib/librte_eal.so.24.1 00:03:01.398 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:01.398 [234/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.398 [235/268] Linking target lib/librte_ring.so.24.1 00:03:01.398 [236/268] Linking target lib/librte_meter.so.24.1 00:03:01.398 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:01.398 [238/268] Linking target lib/librte_timer.so.24.1 00:03:01.398 
[239/268] Linking target lib/librte_pci.so.24.1 00:03:01.398 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:01.656 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:01.656 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:01.656 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:01.656 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:01.656 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:01.656 [246/268] Linking target lib/librte_rcu.so.24.1 00:03:01.656 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:01.656 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:01.656 [249/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.915 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:01.915 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:01.915 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:01.915 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:02.173 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:02.173 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:02.173 [256/268] Linking target lib/librte_net.so.24.1 00:03:02.173 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:02.173 [258/268] Linking target lib/librte_compressdev.so.24.1 00:03:02.173 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:02.173 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:02.432 [261/268] Linking target lib/librte_security.so.24.1 00:03:02.432 [262/268] Linking target lib/librte_hash.so.24.1 00:03:02.432 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:02.432 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:02.432 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:02.432 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:02.690 [267/268] Linking target lib/librte_power.so.24.1 00:03:02.690 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:02.690 INFO: autodetecting backend as ninja 00:03:02.690 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:04.065 CC lib/log/log.o 00:03:04.065 CC lib/log/log_flags.o 00:03:04.065 CC lib/log/log_deprecated.o 00:03:04.065 CC lib/ut/ut.o 00:03:04.065 CC lib/ut_mock/mock.o 00:03:04.065 LIB libspdk_ut.a 00:03:04.065 LIB libspdk_log.a 00:03:04.065 LIB libspdk_ut_mock.a 00:03:04.065 SO libspdk_ut.so.2.0 00:03:04.065 SO libspdk_ut_mock.so.6.0 00:03:04.065 SO libspdk_log.so.7.0 00:03:04.065 SYMLINK libspdk_ut.so 00:03:04.065 SYMLINK libspdk_ut_mock.so 00:03:04.065 SYMLINK libspdk_log.so 00:03:04.323 CC lib/dma/dma.o 00:03:04.323 CC lib/ioat/ioat.o 00:03:04.323 CXX lib/trace_parser/trace.o 00:03:04.323 CC lib/util/base64.o 00:03:04.323 CC lib/util/bit_array.o 00:03:04.323 CC lib/util/crc16.o 00:03:04.323 CC lib/util/cpuset.o 00:03:04.323 CC lib/util/crc32.o 00:03:04.323 CC lib/util/crc32c.o 00:03:04.581 CC lib/vfio_user/host/vfio_user_pci.o 00:03:04.581 CC lib/util/crc32_ieee.o 00:03:04.581 CC lib/util/crc64.o 00:03:04.581 CC 
lib/vfio_user/host/vfio_user.o 00:03:04.581 CC lib/util/dif.o 00:03:04.581 LIB libspdk_dma.a 00:03:04.581 SO libspdk_dma.so.4.0 00:03:04.581 CC lib/util/fd.o 00:03:04.581 CC lib/util/file.o 00:03:04.581 LIB libspdk_ioat.a 00:03:04.581 SYMLINK libspdk_dma.so 00:03:04.581 CC lib/util/hexlify.o 00:03:04.581 SO libspdk_ioat.so.7.0 00:03:04.581 CC lib/util/iov.o 00:03:04.839 SYMLINK libspdk_ioat.so 00:03:04.839 CC lib/util/math.o 00:03:04.839 CC lib/util/pipe.o 00:03:04.839 CC lib/util/strerror_tls.o 00:03:04.839 CC lib/util/string.o 00:03:04.839 LIB libspdk_vfio_user.a 00:03:04.839 CC lib/util/uuid.o 00:03:04.839 SO libspdk_vfio_user.so.5.0 00:03:04.839 CC lib/util/fd_group.o 00:03:04.839 CC lib/util/xor.o 00:03:04.839 CC lib/util/zipf.o 00:03:04.839 SYMLINK libspdk_vfio_user.so 00:03:05.097 LIB libspdk_util.a 00:03:05.355 SO libspdk_util.so.9.1 00:03:05.355 LIB libspdk_trace_parser.a 00:03:05.355 SO libspdk_trace_parser.so.5.0 00:03:05.355 SYMLINK libspdk_util.so 00:03:05.613 SYMLINK libspdk_trace_parser.so 00:03:05.613 CC lib/json/json_parse.o 00:03:05.613 CC lib/rdma/common.o 00:03:05.613 CC lib/json/json_write.o 00:03:05.613 CC lib/json/json_util.o 00:03:05.613 CC lib/rdma/rdma_verbs.o 00:03:05.613 CC lib/conf/conf.o 00:03:05.613 CC lib/vmd/vmd.o 00:03:05.613 CC lib/env_dpdk/env.o 00:03:05.613 CC lib/vmd/led.o 00:03:05.613 CC lib/idxd/idxd.o 00:03:05.871 CC lib/env_dpdk/memory.o 00:03:05.871 CC lib/env_dpdk/pci.o 00:03:05.871 LIB libspdk_conf.a 00:03:05.871 CC lib/env_dpdk/init.o 00:03:05.871 SO libspdk_conf.so.6.0 00:03:05.871 CC lib/idxd/idxd_user.o 00:03:05.871 SYMLINK libspdk_conf.so 00:03:05.871 CC lib/env_dpdk/threads.o 00:03:06.129 LIB libspdk_json.a 00:03:06.129 LIB libspdk_rdma.a 00:03:06.129 SO libspdk_json.so.6.0 00:03:06.129 SO libspdk_rdma.so.6.0 00:03:06.129 SYMLINK libspdk_rdma.so 00:03:06.129 CC lib/env_dpdk/pci_ioat.o 00:03:06.129 SYMLINK libspdk_json.so 00:03:06.129 CC lib/env_dpdk/pci_virtio.o 00:03:06.129 CC lib/idxd/idxd_kernel.o 00:03:06.129 CC lib/env_dpdk/pci_vmd.o 00:03:06.129 CC lib/env_dpdk/pci_idxd.o 00:03:06.386 CC lib/env_dpdk/pci_event.o 00:03:06.386 CC lib/env_dpdk/sigbus_handler.o 00:03:06.386 LIB libspdk_vmd.a 00:03:06.386 CC lib/env_dpdk/pci_dpdk.o 00:03:06.386 SO libspdk_vmd.so.6.0 00:03:06.386 LIB libspdk_idxd.a 00:03:06.386 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:06.386 SO libspdk_idxd.so.12.0 00:03:06.386 CC lib/jsonrpc/jsonrpc_server.o 00:03:06.386 SYMLINK libspdk_vmd.so 00:03:06.386 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:06.386 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:06.386 SYMLINK libspdk_idxd.so 00:03:06.386 CC lib/jsonrpc/jsonrpc_client.o 00:03:06.386 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:06.644 LIB libspdk_jsonrpc.a 00:03:06.644 SO libspdk_jsonrpc.so.6.0 00:03:06.902 SYMLINK libspdk_jsonrpc.so 00:03:07.160 CC lib/rpc/rpc.o 00:03:07.160 LIB libspdk_env_dpdk.a 00:03:07.160 SO libspdk_env_dpdk.so.14.1 00:03:07.417 LIB libspdk_rpc.a 00:03:07.417 SO libspdk_rpc.so.6.0 00:03:07.417 SYMLINK libspdk_rpc.so 00:03:07.417 SYMLINK libspdk_env_dpdk.so 00:03:07.675 CC lib/keyring/keyring.o 00:03:07.675 CC lib/keyring/keyring_rpc.o 00:03:07.675 CC lib/trace/trace.o 00:03:07.675 CC lib/notify/notify.o 00:03:07.675 CC lib/trace/trace_rpc.o 00:03:07.675 CC lib/notify/notify_rpc.o 00:03:07.675 CC lib/trace/trace_flags.o 00:03:07.933 LIB libspdk_notify.a 00:03:07.933 SO libspdk_notify.so.6.0 00:03:07.933 LIB libspdk_keyring.a 00:03:07.933 LIB libspdk_trace.a 00:03:07.933 SYMLINK libspdk_notify.so 00:03:07.933 SO libspdk_keyring.so.1.0 00:03:07.933 SO 
libspdk_trace.so.10.0 00:03:07.933 SYMLINK libspdk_keyring.so 00:03:08.191 SYMLINK libspdk_trace.so 00:03:08.449 CC lib/thread/thread.o 00:03:08.449 CC lib/thread/iobuf.o 00:03:08.449 CC lib/sock/sock.o 00:03:08.449 CC lib/sock/sock_rpc.o 00:03:08.722 LIB libspdk_sock.a 00:03:08.722 SO libspdk_sock.so.10.0 00:03:09.006 SYMLINK libspdk_sock.so 00:03:09.264 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:09.264 CC lib/nvme/nvme_ctrlr.o 00:03:09.264 CC lib/nvme/nvme_fabric.o 00:03:09.264 CC lib/nvme/nvme_ns.o 00:03:09.264 CC lib/nvme/nvme_ns_cmd.o 00:03:09.264 CC lib/nvme/nvme_pcie_common.o 00:03:09.264 CC lib/nvme/nvme_qpair.o 00:03:09.264 CC lib/nvme/nvme_pcie.o 00:03:09.264 CC lib/nvme/nvme.o 00:03:09.830 LIB libspdk_thread.a 00:03:09.830 SO libspdk_thread.so.10.1 00:03:09.830 CC lib/nvme/nvme_quirks.o 00:03:10.088 SYMLINK libspdk_thread.so 00:03:10.088 CC lib/nvme/nvme_transport.o 00:03:10.088 CC lib/nvme/nvme_discovery.o 00:03:10.088 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:10.088 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:10.088 CC lib/nvme/nvme_tcp.o 00:03:10.347 CC lib/nvme/nvme_opal.o 00:03:10.347 CC lib/accel/accel.o 00:03:10.347 CC lib/accel/accel_rpc.o 00:03:10.605 CC lib/accel/accel_sw.o 00:03:10.605 CC lib/nvme/nvme_io_msg.o 00:03:10.605 CC lib/nvme/nvme_poll_group.o 00:03:10.605 CC lib/blob/blobstore.o 00:03:10.864 CC lib/blob/request.o 00:03:10.864 CC lib/blob/zeroes.o 00:03:10.864 CC lib/blob/blob_bs_dev.o 00:03:10.864 CC lib/nvme/nvme_zns.o 00:03:10.864 CC lib/nvme/nvme_stubs.o 00:03:11.122 CC lib/nvme/nvme_auth.o 00:03:11.122 CC lib/nvme/nvme_cuse.o 00:03:11.381 LIB libspdk_accel.a 00:03:11.381 CC lib/nvme/nvme_rdma.o 00:03:11.381 SO libspdk_accel.so.15.0 00:03:11.381 SYMLINK libspdk_accel.so 00:03:11.639 CC lib/init/json_config.o 00:03:11.639 CC lib/init/subsystem.o 00:03:11.639 CC lib/virtio/virtio.o 00:03:11.639 CC lib/init/subsystem_rpc.o 00:03:11.639 CC lib/bdev/bdev.o 00:03:11.639 CC lib/bdev/bdev_rpc.o 00:03:11.639 CC lib/bdev/bdev_zone.o 00:03:11.898 CC lib/init/rpc.o 00:03:11.898 CC lib/bdev/part.o 00:03:11.898 CC lib/virtio/virtio_vhost_user.o 00:03:11.898 CC lib/virtio/virtio_vfio_user.o 00:03:11.898 LIB libspdk_init.a 00:03:11.898 CC lib/bdev/scsi_nvme.o 00:03:11.898 CC lib/virtio/virtio_pci.o 00:03:11.898 SO libspdk_init.so.5.0 00:03:12.157 SYMLINK libspdk_init.so 00:03:12.158 LIB libspdk_virtio.a 00:03:12.416 CC lib/event/reactor.o 00:03:12.416 CC lib/event/app.o 00:03:12.416 CC lib/event/app_rpc.o 00:03:12.416 CC lib/event/scheduler_static.o 00:03:12.416 CC lib/event/log_rpc.o 00:03:12.416 SO libspdk_virtio.so.7.0 00:03:12.416 SYMLINK libspdk_virtio.so 00:03:12.675 LIB libspdk_nvme.a 00:03:12.675 LIB libspdk_event.a 00:03:12.675 SO libspdk_event.so.13.1 00:03:12.934 SYMLINK libspdk_event.so 00:03:12.934 SO libspdk_nvme.so.13.0 00:03:13.192 SYMLINK libspdk_nvme.so 00:03:13.760 LIB libspdk_blob.a 00:03:13.760 SO libspdk_blob.so.11.0 00:03:14.019 SYMLINK libspdk_blob.so 00:03:14.019 CC lib/lvol/lvol.o 00:03:14.277 CC lib/blobfs/blobfs.o 00:03:14.277 CC lib/blobfs/tree.o 00:03:14.277 LIB libspdk_bdev.a 00:03:14.277 SO libspdk_bdev.so.15.0 00:03:14.536 SYMLINK libspdk_bdev.so 00:03:14.536 CC lib/nbd/nbd.o 00:03:14.536 CC lib/nbd/nbd_rpc.o 00:03:14.795 CC lib/ublk/ublk.o 00:03:14.795 CC lib/ublk/ublk_rpc.o 00:03:14.795 CC lib/nvmf/ctrlr.o 00:03:14.795 CC lib/nvmf/ctrlr_discovery.o 00:03:14.795 CC lib/scsi/dev.o 00:03:14.795 CC lib/ftl/ftl_core.o 00:03:14.795 CC lib/scsi/lun.o 00:03:14.795 CC lib/scsi/port.o 00:03:15.054 CC lib/scsi/scsi.o 00:03:15.054 LIB libspdk_blobfs.a 
00:03:15.054 SO libspdk_blobfs.so.10.0 00:03:15.054 LIB libspdk_nbd.a 00:03:15.054 CC lib/nvmf/ctrlr_bdev.o 00:03:15.054 SYMLINK libspdk_blobfs.so 00:03:15.054 CC lib/nvmf/subsystem.o 00:03:15.054 SO libspdk_nbd.so.7.0 00:03:15.054 CC lib/scsi/scsi_bdev.o 00:03:15.054 CC lib/ftl/ftl_init.o 00:03:15.054 LIB libspdk_lvol.a 00:03:15.313 CC lib/ftl/ftl_layout.o 00:03:15.313 SO libspdk_lvol.so.10.0 00:03:15.313 SYMLINK libspdk_nbd.so 00:03:15.313 CC lib/scsi/scsi_pr.o 00:03:15.313 CC lib/ftl/ftl_debug.o 00:03:15.313 SYMLINK libspdk_lvol.so 00:03:15.313 CC lib/ftl/ftl_io.o 00:03:15.313 CC lib/ftl/ftl_sb.o 00:03:15.313 LIB libspdk_ublk.a 00:03:15.313 SO libspdk_ublk.so.3.0 00:03:15.572 SYMLINK libspdk_ublk.so 00:03:15.572 CC lib/ftl/ftl_l2p.o 00:03:15.572 CC lib/ftl/ftl_l2p_flat.o 00:03:15.572 CC lib/nvmf/nvmf.o 00:03:15.572 CC lib/ftl/ftl_nv_cache.o 00:03:15.572 CC lib/ftl/ftl_band.o 00:03:15.572 CC lib/scsi/scsi_rpc.o 00:03:15.572 CC lib/scsi/task.o 00:03:15.831 CC lib/nvmf/nvmf_rpc.o 00:03:15.831 CC lib/nvmf/transport.o 00:03:15.831 CC lib/nvmf/tcp.o 00:03:15.831 CC lib/nvmf/stubs.o 00:03:15.831 LIB libspdk_scsi.a 00:03:16.090 CC lib/ftl/ftl_band_ops.o 00:03:16.090 SO libspdk_scsi.so.9.0 00:03:16.090 SYMLINK libspdk_scsi.so 00:03:16.090 CC lib/nvmf/mdns_server.o 00:03:16.348 CC lib/nvmf/rdma.o 00:03:16.348 CC lib/ftl/ftl_writer.o 00:03:16.348 CC lib/nvmf/auth.o 00:03:16.607 CC lib/ftl/ftl_rq.o 00:03:16.607 CC lib/ftl/ftl_reloc.o 00:03:16.607 CC lib/ftl/ftl_l2p_cache.o 00:03:16.607 CC lib/ftl/ftl_p2l.o 00:03:16.607 CC lib/ftl/mngt/ftl_mngt.o 00:03:16.607 CC lib/iscsi/conn.o 00:03:16.607 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:16.865 CC lib/vhost/vhost.o 00:03:16.865 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:16.865 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:16.865 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:16.865 CC lib/iscsi/init_grp.o 00:03:17.124 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:17.124 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:17.124 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:17.124 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:17.124 CC lib/iscsi/iscsi.o 00:03:17.124 CC lib/vhost/vhost_rpc.o 00:03:17.124 CC lib/vhost/vhost_scsi.o 00:03:17.124 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:17.383 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:17.383 CC lib/vhost/vhost_blk.o 00:03:17.383 CC lib/iscsi/md5.o 00:03:17.383 CC lib/iscsi/param.o 00:03:17.383 CC lib/iscsi/portal_grp.o 00:03:17.383 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:17.641 CC lib/vhost/rte_vhost_user.o 00:03:17.641 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:17.641 CC lib/iscsi/tgt_node.o 00:03:17.641 CC lib/iscsi/iscsi_subsystem.o 00:03:17.900 CC lib/iscsi/iscsi_rpc.o 00:03:17.900 CC lib/iscsi/task.o 00:03:17.900 CC lib/ftl/utils/ftl_conf.o 00:03:18.159 CC lib/ftl/utils/ftl_md.o 00:03:18.159 CC lib/ftl/utils/ftl_mempool.o 00:03:18.159 CC lib/ftl/utils/ftl_bitmap.o 00:03:18.159 CC lib/ftl/utils/ftl_property.o 00:03:18.159 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:18.159 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:18.159 LIB libspdk_nvmf.a 00:03:18.418 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:18.418 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:18.418 SO libspdk_nvmf.so.19.0 00:03:18.418 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:18.418 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:18.418 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:18.418 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:18.418 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:18.676 SYMLINK libspdk_nvmf.so 00:03:18.676 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:18.676 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:18.676 CC 
lib/ftl/base/ftl_base_dev.o 00:03:18.676 LIB libspdk_iscsi.a 00:03:18.676 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.676 CC lib/ftl/ftl_trace.o 00:03:18.676 LIB libspdk_vhost.a 00:03:18.676 SO libspdk_iscsi.so.8.0 00:03:18.676 SO libspdk_vhost.so.8.0 00:03:18.935 SYMLINK libspdk_vhost.so 00:03:18.935 SYMLINK libspdk_iscsi.so 00:03:18.935 LIB libspdk_ftl.a 00:03:19.194 SO libspdk_ftl.so.9.0 00:03:19.453 SYMLINK libspdk_ftl.so 00:03:20.021 CC module/env_dpdk/env_dpdk_rpc.o 00:03:20.021 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:20.021 CC module/accel/dsa/accel_dsa.o 00:03:20.021 CC module/accel/ioat/accel_ioat.o 00:03:20.021 CC module/accel/error/accel_error.o 00:03:20.021 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:20.021 CC module/accel/iaa/accel_iaa.o 00:03:20.021 CC module/keyring/file/keyring.o 00:03:20.021 CC module/blob/bdev/blob_bdev.o 00:03:20.021 CC module/sock/posix/posix.o 00:03:20.021 LIB libspdk_env_dpdk_rpc.a 00:03:20.021 SO libspdk_env_dpdk_rpc.so.6.0 00:03:20.279 SYMLINK libspdk_env_dpdk_rpc.so 00:03:20.279 CC module/accel/error/accel_error_rpc.o 00:03:20.279 LIB libspdk_scheduler_dpdk_governor.a 00:03:20.279 CC module/keyring/file/keyring_rpc.o 00:03:20.279 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:20.279 CC module/accel/ioat/accel_ioat_rpc.o 00:03:20.279 LIB libspdk_scheduler_dynamic.a 00:03:20.279 CC module/accel/iaa/accel_iaa_rpc.o 00:03:20.279 SO libspdk_scheduler_dynamic.so.4.0 00:03:20.279 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:20.279 LIB libspdk_blob_bdev.a 00:03:20.279 CC module/accel/dsa/accel_dsa_rpc.o 00:03:20.279 SO libspdk_blob_bdev.so.11.0 00:03:20.279 SYMLINK libspdk_scheduler_dynamic.so 00:03:20.279 LIB libspdk_accel_error.a 00:03:20.279 LIB libspdk_keyring_file.a 00:03:20.537 LIB libspdk_accel_ioat.a 00:03:20.537 SO libspdk_accel_error.so.2.0 00:03:20.537 LIB libspdk_accel_iaa.a 00:03:20.537 SO libspdk_keyring_file.so.1.0 00:03:20.537 CC module/sock/uring/uring.o 00:03:20.537 SYMLINK libspdk_blob_bdev.so 00:03:20.537 SO libspdk_accel_iaa.so.3.0 00:03:20.537 SO libspdk_accel_ioat.so.6.0 00:03:20.537 LIB libspdk_accel_dsa.a 00:03:20.537 SYMLINK libspdk_accel_error.so 00:03:20.537 SYMLINK libspdk_keyring_file.so 00:03:20.537 SO libspdk_accel_dsa.so.5.0 00:03:20.537 SYMLINK libspdk_accel_iaa.so 00:03:20.537 CC module/keyring/linux/keyring.o 00:03:20.537 CC module/keyring/linux/keyring_rpc.o 00:03:20.537 SYMLINK libspdk_accel_ioat.so 00:03:20.537 CC module/scheduler/gscheduler/gscheduler.o 00:03:20.537 SYMLINK libspdk_accel_dsa.so 00:03:20.796 LIB libspdk_keyring_linux.a 00:03:20.796 SO libspdk_keyring_linux.so.1.0 00:03:20.796 LIB libspdk_scheduler_gscheduler.a 00:03:20.796 CC module/bdev/error/vbdev_error.o 00:03:20.796 CC module/blobfs/bdev/blobfs_bdev.o 00:03:20.796 SO libspdk_scheduler_gscheduler.so.4.0 00:03:20.796 CC module/bdev/gpt/gpt.o 00:03:20.796 LIB libspdk_sock_posix.a 00:03:20.796 CC module/bdev/lvol/vbdev_lvol.o 00:03:20.796 CC module/bdev/delay/vbdev_delay.o 00:03:20.796 SYMLINK libspdk_keyring_linux.so 00:03:20.796 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:20.796 SO libspdk_sock_posix.so.6.0 00:03:20.796 SYMLINK libspdk_scheduler_gscheduler.so 00:03:20.796 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:20.796 CC module/bdev/malloc/bdev_malloc.o 00:03:21.054 SYMLINK libspdk_sock_posix.so 00:03:21.054 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:21.054 CC module/bdev/error/vbdev_error_rpc.o 00:03:21.054 CC module/bdev/gpt/vbdev_gpt.o 00:03:21.054 LIB libspdk_sock_uring.a 00:03:21.054 CC 
module/bdev/null/bdev_null.o 00:03:21.054 SO libspdk_sock_uring.so.5.0 00:03:21.313 LIB libspdk_blobfs_bdev.a 00:03:21.313 SO libspdk_blobfs_bdev.so.6.0 00:03:21.313 SYMLINK libspdk_sock_uring.so 00:03:21.313 LIB libspdk_bdev_error.a 00:03:21.313 CC module/bdev/nvme/bdev_nvme.o 00:03:21.313 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:21.313 LIB libspdk_bdev_delay.a 00:03:21.313 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:21.313 SO libspdk_bdev_error.so.6.0 00:03:21.313 SYMLINK libspdk_blobfs_bdev.so 00:03:21.313 LIB libspdk_bdev_lvol.a 00:03:21.313 CC module/bdev/null/bdev_null_rpc.o 00:03:21.313 SO libspdk_bdev_delay.so.6.0 00:03:21.313 SYMLINK libspdk_bdev_error.so 00:03:21.313 SO libspdk_bdev_lvol.so.6.0 00:03:21.313 LIB libspdk_bdev_gpt.a 00:03:21.313 SYMLINK libspdk_bdev_delay.so 00:03:21.313 SO libspdk_bdev_gpt.so.6.0 00:03:21.313 SYMLINK libspdk_bdev_lvol.so 00:03:21.572 CC module/bdev/passthru/vbdev_passthru.o 00:03:21.572 SYMLINK libspdk_bdev_gpt.so 00:03:21.572 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:21.572 LIB libspdk_bdev_null.a 00:03:21.572 LIB libspdk_bdev_malloc.a 00:03:21.572 SO libspdk_bdev_null.so.6.0 00:03:21.572 SO libspdk_bdev_malloc.so.6.0 00:03:21.572 CC module/bdev/split/vbdev_split.o 00:03:21.572 SYMLINK libspdk_bdev_null.so 00:03:21.572 CC module/bdev/raid/bdev_raid.o 00:03:21.572 SYMLINK libspdk_bdev_malloc.so 00:03:21.572 CC module/bdev/split/vbdev_split_rpc.o 00:03:21.572 CC module/bdev/nvme/nvme_rpc.o 00:03:21.572 CC module/bdev/uring/bdev_uring.o 00:03:21.572 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:21.830 LIB libspdk_bdev_passthru.a 00:03:21.830 SO libspdk_bdev_passthru.so.6.0 00:03:21.830 CC module/bdev/aio/bdev_aio.o 00:03:21.830 CC module/bdev/aio/bdev_aio_rpc.o 00:03:21.830 SYMLINK libspdk_bdev_passthru.so 00:03:21.830 LIB libspdk_bdev_split.a 00:03:21.830 CC module/bdev/uring/bdev_uring_rpc.o 00:03:21.830 SO libspdk_bdev_split.so.6.0 00:03:21.830 CC module/bdev/raid/bdev_raid_rpc.o 00:03:22.089 SYMLINK libspdk_bdev_split.so 00:03:22.089 CC module/bdev/raid/bdev_raid_sb.o 00:03:22.089 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:22.089 CC module/bdev/nvme/bdev_mdns_client.o 00:03:22.089 LIB libspdk_bdev_uring.a 00:03:22.089 CC module/bdev/ftl/bdev_ftl.o 00:03:22.089 SO libspdk_bdev_uring.so.6.0 00:03:22.089 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.089 LIB libspdk_bdev_aio.a 00:03:22.349 LIB libspdk_bdev_zone_block.a 00:03:22.349 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.349 SO libspdk_bdev_aio.so.6.0 00:03:22.349 SYMLINK libspdk_bdev_uring.so 00:03:22.349 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.349 SO libspdk_bdev_zone_block.so.6.0 00:03:22.349 CC module/bdev/raid/raid0.o 00:03:22.349 SYMLINK libspdk_bdev_aio.so 00:03:22.349 CC module/bdev/raid/raid1.o 00:03:22.349 SYMLINK libspdk_bdev_zone_block.so 00:03:22.349 CC module/bdev/raid/concat.o 00:03:22.349 CC module/bdev/nvme/vbdev_opal.o 00:03:22.349 LIB libspdk_bdev_ftl.a 00:03:22.349 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:22.349 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.349 SO libspdk_bdev_ftl.so.6.0 00:03:22.607 SYMLINK libspdk_bdev_ftl.so 00:03:22.607 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:22.607 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.607 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.607 LIB libspdk_bdev_raid.a 00:03:22.607 LIB libspdk_bdev_iscsi.a 00:03:22.607 SO libspdk_bdev_iscsi.so.6.0 00:03:22.607 SO libspdk_bdev_raid.so.6.0 00:03:22.878 SYMLINK libspdk_bdev_iscsi.so 00:03:22.878 SYMLINK libspdk_bdev_raid.so 00:03:22.878 LIB 
libspdk_bdev_virtio.a 00:03:23.156 SO libspdk_bdev_virtio.so.6.0 00:03:23.156 SYMLINK libspdk_bdev_virtio.so 00:03:23.415 LIB libspdk_bdev_nvme.a 00:03:23.674 SO libspdk_bdev_nvme.so.7.0 00:03:23.674 SYMLINK libspdk_bdev_nvme.so 00:03:24.240 CC module/event/subsystems/scheduler/scheduler.o 00:03:24.240 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.241 CC module/event/subsystems/keyring/keyring.o 00:03:24.241 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:24.241 CC module/event/subsystems/iobuf/iobuf.o 00:03:24.241 CC module/event/subsystems/vmd/vmd.o 00:03:24.241 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:24.241 CC module/event/subsystems/sock/sock.o 00:03:24.241 LIB libspdk_event_scheduler.a 00:03:24.241 LIB libspdk_event_keyring.a 00:03:24.241 LIB libspdk_event_vhost_blk.a 00:03:24.241 SO libspdk_event_scheduler.so.4.0 00:03:24.241 LIB libspdk_event_vmd.a 00:03:24.241 LIB libspdk_event_sock.a 00:03:24.241 SO libspdk_event_vhost_blk.so.3.0 00:03:24.241 SO libspdk_event_keyring.so.1.0 00:03:24.499 LIB libspdk_event_iobuf.a 00:03:24.499 SO libspdk_event_vmd.so.6.0 00:03:24.499 SYMLINK libspdk_event_scheduler.so 00:03:24.499 SO libspdk_event_sock.so.5.0 00:03:24.499 SYMLINK libspdk_event_vhost_blk.so 00:03:24.499 SO libspdk_event_iobuf.so.3.0 00:03:24.499 SYMLINK libspdk_event_keyring.so 00:03:24.499 SYMLINK libspdk_event_vmd.so 00:03:24.499 SYMLINK libspdk_event_sock.so 00:03:24.499 SYMLINK libspdk_event_iobuf.so 00:03:24.758 CC module/event/subsystems/accel/accel.o 00:03:25.016 LIB libspdk_event_accel.a 00:03:25.016 SO libspdk_event_accel.so.6.0 00:03:25.016 SYMLINK libspdk_event_accel.so 00:03:25.275 CC module/event/subsystems/bdev/bdev.o 00:03:25.534 LIB libspdk_event_bdev.a 00:03:25.534 SO libspdk_event_bdev.so.6.0 00:03:25.793 SYMLINK libspdk_event_bdev.so 00:03:26.051 CC module/event/subsystems/scsi/scsi.o 00:03:26.051 CC module/event/subsystems/nbd/nbd.o 00:03:26.051 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:26.051 CC module/event/subsystems/ublk/ublk.o 00:03:26.051 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:26.051 LIB libspdk_event_ublk.a 00:03:26.051 LIB libspdk_event_nbd.a 00:03:26.051 SO libspdk_event_ublk.so.3.0 00:03:26.051 SO libspdk_event_nbd.so.6.0 00:03:26.051 LIB libspdk_event_scsi.a 00:03:26.051 SO libspdk_event_scsi.so.6.0 00:03:26.051 SYMLINK libspdk_event_ublk.so 00:03:26.051 SYMLINK libspdk_event_nbd.so 00:03:26.309 LIB libspdk_event_nvmf.a 00:03:26.309 SYMLINK libspdk_event_scsi.so 00:03:26.309 SO libspdk_event_nvmf.so.6.0 00:03:26.309 SYMLINK libspdk_event_nvmf.so 00:03:26.568 CC module/event/subsystems/iscsi/iscsi.o 00:03:26.568 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:26.568 LIB libspdk_event_vhost_scsi.a 00:03:26.568 LIB libspdk_event_iscsi.a 00:03:26.827 SO libspdk_event_vhost_scsi.so.3.0 00:03:26.827 SO libspdk_event_iscsi.so.6.0 00:03:26.827 SYMLINK libspdk_event_vhost_scsi.so 00:03:26.827 SYMLINK libspdk_event_iscsi.so 00:03:26.827 SO libspdk.so.6.0 00:03:26.827 SYMLINK libspdk.so 00:03:27.086 CXX app/trace/trace.o 00:03:27.086 CC app/trace_record/trace_record.o 00:03:27.344 CC app/nvmf_tgt/nvmf_main.o 00:03:27.344 CC app/spdk_tgt/spdk_tgt.o 00:03:27.344 CC app/iscsi_tgt/iscsi_tgt.o 00:03:27.344 CC examples/accel/perf/accel_perf.o 00:03:27.344 CC test/accel/dif/dif.o 00:03:27.344 CC test/bdev/bdevio/bdevio.o 00:03:27.344 CC test/app/bdev_svc/bdev_svc.o 00:03:27.344 CC test/blobfs/mkfs/mkfs.o 00:03:27.344 LINK nvmf_tgt 00:03:27.602 LINK spdk_tgt 00:03:27.602 LINK iscsi_tgt 00:03:27.602 LINK spdk_trace_record 
00:03:27.602 LINK bdev_svc 00:03:27.602 LINK mkfs 00:03:27.602 LINK spdk_trace 00:03:27.861 CC app/spdk_lspci/spdk_lspci.o 00:03:27.861 LINK accel_perf 00:03:27.861 LINK bdevio 00:03:27.861 LINK dif 00:03:27.861 CC app/spdk_nvme_perf/perf.o 00:03:27.861 CC test/app/histogram_perf/histogram_perf.o 00:03:27.861 CC test/app/jsoncat/jsoncat.o 00:03:27.861 LINK spdk_lspci 00:03:27.861 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.861 CC test/app/stub/stub.o 00:03:27.861 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:28.120 LINK histogram_perf 00:03:28.120 LINK jsoncat 00:03:28.120 LINK stub 00:03:28.120 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:28.120 LINK hello_bdev 00:03:28.120 CC examples/blob/hello_world/hello_blob.o 00:03:28.121 CC examples/ioat/perf/perf.o 00:03:28.121 CC examples/nvme/hello_world/hello_world.o 00:03:28.379 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:28.379 LINK nvme_fuzz 00:03:28.379 CC examples/blob/cli/blobcli.o 00:03:28.379 CC app/spdk_nvme_identify/identify.o 00:03:28.379 LINK ioat_perf 00:03:28.379 LINK hello_blob 00:03:28.379 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:28.379 LINK hello_world 00:03:28.638 CC examples/bdev/bdevperf/bdevperf.o 00:03:28.638 CC examples/ioat/verify/verify.o 00:03:28.638 CC app/spdk_nvme_discover/discovery_aer.o 00:03:28.638 CC app/spdk_top/spdk_top.o 00:03:28.638 LINK spdk_nvme_perf 00:03:28.918 CC examples/nvme/reconnect/reconnect.o 00:03:28.918 LINK blobcli 00:03:28.918 LINK vhost_fuzz 00:03:28.918 LINK verify 00:03:28.918 LINK spdk_nvme_discover 00:03:29.177 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:29.177 TEST_HEADER include/spdk/accel.h 00:03:29.177 TEST_HEADER include/spdk/accel_module.h 00:03:29.177 TEST_HEADER include/spdk/assert.h 00:03:29.177 TEST_HEADER include/spdk/barrier.h 00:03:29.177 TEST_HEADER include/spdk/base64.h 00:03:29.177 TEST_HEADER include/spdk/bdev.h 00:03:29.177 TEST_HEADER include/spdk/bdev_module.h 00:03:29.177 TEST_HEADER include/spdk/bdev_zone.h 00:03:29.177 TEST_HEADER include/spdk/bit_array.h 00:03:29.177 TEST_HEADER include/spdk/bit_pool.h 00:03:29.177 TEST_HEADER include/spdk/blob_bdev.h 00:03:29.177 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:29.177 TEST_HEADER include/spdk/blobfs.h 00:03:29.177 TEST_HEADER include/spdk/blob.h 00:03:29.177 TEST_HEADER include/spdk/conf.h 00:03:29.177 TEST_HEADER include/spdk/config.h 00:03:29.177 TEST_HEADER include/spdk/cpuset.h 00:03:29.177 TEST_HEADER include/spdk/crc16.h 00:03:29.177 TEST_HEADER include/spdk/crc32.h 00:03:29.177 TEST_HEADER include/spdk/crc64.h 00:03:29.177 TEST_HEADER include/spdk/dif.h 00:03:29.177 TEST_HEADER include/spdk/dma.h 00:03:29.177 TEST_HEADER include/spdk/endian.h 00:03:29.177 TEST_HEADER include/spdk/env_dpdk.h 00:03:29.177 TEST_HEADER include/spdk/env.h 00:03:29.177 TEST_HEADER include/spdk/event.h 00:03:29.177 TEST_HEADER include/spdk/fd_group.h 00:03:29.177 TEST_HEADER include/spdk/fd.h 00:03:29.177 LINK spdk_nvme_identify 00:03:29.177 LINK reconnect 00:03:29.177 TEST_HEADER include/spdk/file.h 00:03:29.177 TEST_HEADER include/spdk/ftl.h 00:03:29.177 TEST_HEADER include/spdk/gpt_spec.h 00:03:29.177 TEST_HEADER include/spdk/hexlify.h 00:03:29.177 TEST_HEADER include/spdk/histogram_data.h 00:03:29.177 CC app/vhost/vhost.o 00:03:29.177 TEST_HEADER include/spdk/idxd.h 00:03:29.177 TEST_HEADER include/spdk/idxd_spec.h 00:03:29.177 TEST_HEADER include/spdk/init.h 00:03:29.177 TEST_HEADER include/spdk/ioat.h 00:03:29.177 TEST_HEADER include/spdk/ioat_spec.h 00:03:29.177 TEST_HEADER include/spdk/iscsi_spec.h 
00:03:29.177 TEST_HEADER include/spdk/json.h 00:03:29.177 TEST_HEADER include/spdk/jsonrpc.h 00:03:29.177 TEST_HEADER include/spdk/keyring.h 00:03:29.177 TEST_HEADER include/spdk/keyring_module.h 00:03:29.177 TEST_HEADER include/spdk/likely.h 00:03:29.177 TEST_HEADER include/spdk/log.h 00:03:29.177 TEST_HEADER include/spdk/lvol.h 00:03:29.177 TEST_HEADER include/spdk/memory.h 00:03:29.177 TEST_HEADER include/spdk/mmio.h 00:03:29.177 TEST_HEADER include/spdk/nbd.h 00:03:29.177 TEST_HEADER include/spdk/notify.h 00:03:29.177 TEST_HEADER include/spdk/nvme.h 00:03:29.177 TEST_HEADER include/spdk/nvme_intel.h 00:03:29.177 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:29.177 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:29.177 TEST_HEADER include/spdk/nvme_spec.h 00:03:29.177 TEST_HEADER include/spdk/nvme_zns.h 00:03:29.177 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:29.177 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:29.177 TEST_HEADER include/spdk/nvmf.h 00:03:29.177 TEST_HEADER include/spdk/nvmf_spec.h 00:03:29.177 TEST_HEADER include/spdk/nvmf_transport.h 00:03:29.177 TEST_HEADER include/spdk/opal.h 00:03:29.177 TEST_HEADER include/spdk/opal_spec.h 00:03:29.177 TEST_HEADER include/spdk/pci_ids.h 00:03:29.177 TEST_HEADER include/spdk/pipe.h 00:03:29.177 TEST_HEADER include/spdk/queue.h 00:03:29.177 TEST_HEADER include/spdk/reduce.h 00:03:29.177 TEST_HEADER include/spdk/rpc.h 00:03:29.177 TEST_HEADER include/spdk/scheduler.h 00:03:29.177 TEST_HEADER include/spdk/scsi.h 00:03:29.177 TEST_HEADER include/spdk/scsi_spec.h 00:03:29.177 TEST_HEADER include/spdk/sock.h 00:03:29.177 TEST_HEADER include/spdk/stdinc.h 00:03:29.177 TEST_HEADER include/spdk/string.h 00:03:29.177 TEST_HEADER include/spdk/thread.h 00:03:29.177 TEST_HEADER include/spdk/trace.h 00:03:29.177 TEST_HEADER include/spdk/trace_parser.h 00:03:29.177 TEST_HEADER include/spdk/tree.h 00:03:29.177 TEST_HEADER include/spdk/ublk.h 00:03:29.177 TEST_HEADER include/spdk/util.h 00:03:29.177 TEST_HEADER include/spdk/uuid.h 00:03:29.177 TEST_HEADER include/spdk/version.h 00:03:29.177 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:29.177 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:29.177 TEST_HEADER include/spdk/vhost.h 00:03:29.177 TEST_HEADER include/spdk/vmd.h 00:03:29.177 TEST_HEADER include/spdk/xor.h 00:03:29.177 TEST_HEADER include/spdk/zipf.h 00:03:29.177 CXX test/cpp_headers/accel.o 00:03:29.435 CC test/dma/test_dma/test_dma.o 00:03:29.435 LINK vhost 00:03:29.435 LINK bdevperf 00:03:29.435 CXX test/cpp_headers/accel_module.o 00:03:29.435 CC test/env/mem_callbacks/mem_callbacks.o 00:03:29.435 CC test/event/event_perf/event_perf.o 00:03:29.693 CC test/lvol/esnap/esnap.o 00:03:29.693 LINK nvme_manage 00:03:29.693 LINK spdk_top 00:03:29.694 CXX test/cpp_headers/assert.o 00:03:29.694 LINK event_perf 00:03:29.694 LINK test_dma 00:03:29.694 CC app/spdk_dd/spdk_dd.o 00:03:29.694 CXX test/cpp_headers/barrier.o 00:03:29.952 CC examples/nvme/arbitration/arbitration.o 00:03:29.952 LINK iscsi_fuzz 00:03:29.952 CC test/event/reactor_perf/reactor_perf.o 00:03:29.952 CC test/event/reactor/reactor.o 00:03:29.952 CC app/fio/nvme/fio_plugin.o 00:03:29.952 CXX test/cpp_headers/base64.o 00:03:29.952 LINK reactor_perf 00:03:29.952 LINK reactor 00:03:30.210 CC test/nvme/aer/aer.o 00:03:30.210 CXX test/cpp_headers/bdev.o 00:03:30.210 LINK mem_callbacks 00:03:30.210 LINK arbitration 00:03:30.210 CC test/nvme/reset/reset.o 00:03:30.210 LINK spdk_dd 00:03:30.467 CC test/event/app_repeat/app_repeat.o 00:03:30.467 CC app/fio/bdev/fio_plugin.o 00:03:30.467 CXX 
test/cpp_headers/bdev_module.o 00:03:30.467 CC test/env/vtophys/vtophys.o 00:03:30.467 LINK reset 00:03:30.467 CC examples/nvme/hotplug/hotplug.o 00:03:30.467 LINK app_repeat 00:03:30.467 LINK spdk_nvme 00:03:30.726 CC test/rpc_client/rpc_client_test.o 00:03:30.726 LINK vtophys 00:03:30.726 CXX test/cpp_headers/bdev_zone.o 00:03:30.726 LINK aer 00:03:30.726 LINK hotplug 00:03:30.726 CC test/nvme/sgl/sgl.o 00:03:30.726 LINK rpc_client_test 00:03:30.726 CXX test/cpp_headers/bit_array.o 00:03:30.986 CC test/thread/poller_perf/poller_perf.o 00:03:30.986 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:30.986 CXX test/cpp_headers/bit_pool.o 00:03:30.986 LINK spdk_bdev 00:03:30.986 CC test/event/scheduler/scheduler.o 00:03:30.986 CXX test/cpp_headers/blob_bdev.o 00:03:30.986 LINK poller_perf 00:03:30.986 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:30.986 LINK env_dpdk_post_init 00:03:31.244 LINK sgl 00:03:31.244 CXX test/cpp_headers/blobfs_bdev.o 00:03:31.244 CC examples/sock/hello_world/hello_sock.o 00:03:31.244 LINK cmb_copy 00:03:31.244 LINK scheduler 00:03:31.244 CC examples/vmd/lsvmd/lsvmd.o 00:03:31.244 CC test/env/memory/memory_ut.o 00:03:31.244 CC examples/util/zipf/zipf.o 00:03:31.503 CC examples/nvmf/nvmf/nvmf.o 00:03:31.503 CC test/nvme/e2edp/nvme_dp.o 00:03:31.503 LINK hello_sock 00:03:31.503 LINK lsvmd 00:03:31.503 CC examples/nvme/abort/abort.o 00:03:31.503 CXX test/cpp_headers/blobfs.o 00:03:31.503 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:31.503 LINK zipf 00:03:31.762 CXX test/cpp_headers/blob.o 00:03:31.762 LINK nvmf 00:03:31.762 CC examples/vmd/led/led.o 00:03:31.762 LINK pmr_persistence 00:03:31.762 LINK nvme_dp 00:03:31.762 CC examples/thread/thread/thread_ex.o 00:03:31.762 CXX test/cpp_headers/conf.o 00:03:31.762 LINK abort 00:03:31.762 CXX test/cpp_headers/config.o 00:03:32.021 CC examples/idxd/perf/perf.o 00:03:32.021 CC test/nvme/overhead/overhead.o 00:03:32.021 LINK led 00:03:32.021 CXX test/cpp_headers/cpuset.o 00:03:32.021 CC test/env/pci/pci_ut.o 00:03:32.021 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:32.021 CC test/nvme/err_injection/err_injection.o 00:03:32.021 LINK thread 00:03:32.280 CXX test/cpp_headers/crc16.o 00:03:32.280 LINK interrupt_tgt 00:03:32.280 LINK idxd_perf 00:03:32.280 LINK err_injection 00:03:32.280 LINK overhead 00:03:32.280 CXX test/cpp_headers/crc32.o 00:03:32.538 CC test/nvme/startup/startup.o 00:03:32.538 CC test/nvme/reserve/reserve.o 00:03:32.539 CXX test/cpp_headers/crc64.o 00:03:32.539 LINK pci_ut 00:03:32.539 LINK memory_ut 00:03:32.539 CXX test/cpp_headers/dif.o 00:03:32.539 CXX test/cpp_headers/dma.o 00:03:32.539 CXX test/cpp_headers/endian.o 00:03:32.539 CXX test/cpp_headers/env_dpdk.o 00:03:32.539 CXX test/cpp_headers/env.o 00:03:32.539 LINK startup 00:03:32.797 CXX test/cpp_headers/event.o 00:03:32.797 CXX test/cpp_headers/fd_group.o 00:03:32.797 LINK reserve 00:03:32.797 CC test/nvme/simple_copy/simple_copy.o 00:03:32.797 CC test/nvme/connect_stress/connect_stress.o 00:03:32.797 CC test/nvme/boot_partition/boot_partition.o 00:03:32.797 CC test/nvme/compliance/nvme_compliance.o 00:03:32.797 CC test/nvme/fused_ordering/fused_ordering.o 00:03:32.797 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:33.056 CXX test/cpp_headers/fd.o 00:03:33.056 CC test/nvme/fdp/fdp.o 00:03:33.056 LINK boot_partition 00:03:33.056 CC test/nvme/cuse/cuse.o 00:03:33.056 LINK connect_stress 00:03:33.056 LINK simple_copy 00:03:33.056 LINK doorbell_aers 00:03:33.056 LINK fused_ordering 00:03:33.056 CXX test/cpp_headers/file.o 
00:03:33.314 CXX test/cpp_headers/ftl.o 00:03:33.314 LINK nvme_compliance 00:03:33.314 CXX test/cpp_headers/gpt_spec.o 00:03:33.314 CXX test/cpp_headers/hexlify.o 00:03:33.314 CXX test/cpp_headers/histogram_data.o 00:03:33.314 CXX test/cpp_headers/idxd.o 00:03:33.314 LINK fdp 00:03:33.314 CXX test/cpp_headers/idxd_spec.o 00:03:33.314 CXX test/cpp_headers/init.o 00:03:33.314 CXX test/cpp_headers/ioat.o 00:03:33.314 CXX test/cpp_headers/ioat_spec.o 00:03:33.314 CXX test/cpp_headers/iscsi_spec.o 00:03:33.314 CXX test/cpp_headers/json.o 00:03:33.573 CXX test/cpp_headers/jsonrpc.o 00:03:33.573 CXX test/cpp_headers/keyring.o 00:03:33.573 CXX test/cpp_headers/keyring_module.o 00:03:33.573 CXX test/cpp_headers/likely.o 00:03:33.573 CXX test/cpp_headers/log.o 00:03:33.573 CXX test/cpp_headers/lvol.o 00:03:33.573 CXX test/cpp_headers/memory.o 00:03:33.573 CXX test/cpp_headers/mmio.o 00:03:33.573 CXX test/cpp_headers/nbd.o 00:03:33.573 CXX test/cpp_headers/notify.o 00:03:33.573 CXX test/cpp_headers/nvme.o 00:03:33.573 CXX test/cpp_headers/nvme_intel.o 00:03:33.832 CXX test/cpp_headers/nvme_ocssd.o 00:03:33.832 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.832 CXX test/cpp_headers/nvme_spec.o 00:03:33.832 CXX test/cpp_headers/nvme_zns.o 00:03:33.832 CXX test/cpp_headers/nvmf_cmd.o 00:03:33.832 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:33.832 CXX test/cpp_headers/nvmf.o 00:03:33.832 CXX test/cpp_headers/nvmf_spec.o 00:03:33.832 CXX test/cpp_headers/nvmf_transport.o 00:03:33.832 CXX test/cpp_headers/opal.o 00:03:34.091 CXX test/cpp_headers/opal_spec.o 00:03:34.091 CXX test/cpp_headers/pci_ids.o 00:03:34.091 CXX test/cpp_headers/pipe.o 00:03:34.091 CXX test/cpp_headers/queue.o 00:03:34.091 CXX test/cpp_headers/reduce.o 00:03:34.091 CXX test/cpp_headers/rpc.o 00:03:34.091 CXX test/cpp_headers/scheduler.o 00:03:34.091 CXX test/cpp_headers/scsi.o 00:03:34.091 CXX test/cpp_headers/scsi_spec.o 00:03:34.091 CXX test/cpp_headers/sock.o 00:03:34.091 CXX test/cpp_headers/stdinc.o 00:03:34.349 CXX test/cpp_headers/string.o 00:03:34.349 CXX test/cpp_headers/thread.o 00:03:34.349 CXX test/cpp_headers/trace.o 00:03:34.349 CXX test/cpp_headers/trace_parser.o 00:03:34.349 CXX test/cpp_headers/tree.o 00:03:34.349 CXX test/cpp_headers/ublk.o 00:03:34.349 CXX test/cpp_headers/util.o 00:03:34.349 CXX test/cpp_headers/uuid.o 00:03:34.349 CXX test/cpp_headers/version.o 00:03:34.349 CXX test/cpp_headers/vfio_user_pci.o 00:03:34.349 CXX test/cpp_headers/vfio_user_spec.o 00:03:34.349 CXX test/cpp_headers/vhost.o 00:03:34.349 LINK cuse 00:03:34.349 CXX test/cpp_headers/vmd.o 00:03:34.349 CXX test/cpp_headers/xor.o 00:03:34.608 CXX test/cpp_headers/zipf.o 00:03:34.608 LINK esnap 00:03:35.175 00:03:35.175 real 1m3.699s 00:03:35.175 user 6m32.602s 00:03:35.175 sys 1m40.707s 00:03:35.175 07:58:56 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:03:35.175 07:58:56 make -- common/autotest_common.sh@10 -- $ set +x 00:03:35.175 ************************************ 00:03:35.175 END TEST make 00:03:35.175 ************************************ 00:03:35.175 07:58:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:35.175 07:58:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:35.175 07:58:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:35.175 07:58:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.175 07:58:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:35.175 07:58:56 -- pm/common@44 -- $ pid=5140 00:03:35.175 
07:58:56 -- pm/common@50 -- $ kill -TERM 5140 00:03:35.175 07:58:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.175 07:58:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:35.175 07:58:56 -- pm/common@44 -- $ pid=5142 00:03:35.175 07:58:56 -- pm/common@50 -- $ kill -TERM 5142 00:03:35.175 07:58:56 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:35.175 07:58:56 -- nvmf/common.sh@7 -- # uname -s 00:03:35.175 07:58:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:35.175 07:58:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:35.175 07:58:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:35.175 07:58:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:35.175 07:58:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:35.175 07:58:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:35.175 07:58:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:35.175 07:58:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:35.175 07:58:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:35.175 07:58:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:35.175 07:58:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:03:35.175 07:58:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:03:35.175 07:58:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:35.175 07:58:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:35.175 07:58:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:35.175 07:58:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:35.175 07:58:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:35.175 07:58:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:35.175 07:58:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:35.175 07:58:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:35.175 07:58:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.175 07:58:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.175 07:58:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.175 07:58:57 -- paths/export.sh@5 -- # export PATH 00:03:35.175 07:58:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.175 07:58:57 -- nvmf/common.sh@47 -- # : 0 00:03:35.175 
07:58:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:35.175 07:58:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:35.175 07:58:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:35.175 07:58:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:35.175 07:58:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:35.175 07:58:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:35.175 07:58:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:35.175 07:58:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:35.175 07:58:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:35.175 07:58:57 -- spdk/autotest.sh@32 -- # uname -s 00:03:35.175 07:58:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:35.175 07:58:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:35.175 07:58:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:35.175 07:58:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:35.175 07:58:57 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:35.175 07:58:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:35.446 07:58:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:35.446 07:58:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:35.446 07:58:57 -- spdk/autotest.sh@48 -- # udevadm_pid=52689 00:03:35.446 07:58:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:35.446 07:58:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:35.446 07:58:57 -- pm/common@17 -- # local monitor 00:03:35.446 07:58:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.446 07:58:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.446 07:58:57 -- pm/common@25 -- # sleep 1 00:03:35.446 07:58:57 -- pm/common@21 -- # date +%s 00:03:35.446 07:58:57 -- pm/common@21 -- # date +%s 00:03:35.446 07:58:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1718006337 00:03:35.446 07:58:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1718006337 00:03:35.446 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1718006337_collect-vmstat.pm.log 00:03:35.446 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1718006337_collect-cpu-load.pm.log 00:03:36.379 07:58:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:36.379 07:58:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:36.379 07:58:58 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:36.379 07:58:58 -- common/autotest_common.sh@10 -- # set +x 00:03:36.379 07:58:58 -- spdk/autotest.sh@59 -- # create_test_list 00:03:36.379 07:58:58 -- common/autotest_common.sh@747 -- # xtrace_disable 00:03:36.379 07:58:58 -- common/autotest_common.sh@10 -- # set +x 00:03:36.379 07:58:58 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:36.379 07:58:58 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:36.379 07:58:58 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:36.379 07:58:58 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 
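The pm/common lines above show the resource monitors being cycled around the build: collect-cpu-load and collect-vmstat are (re)started with a date +%s suffix on their log names, and are stopped through per-monitor pidfiles under the power output directory. A minimal sketch of that stop path, with the paths taken from the log and the helper name ours:

    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    stop_monitor() {
        # guarded kill, as in pm/common@43 and @50: only signal a monitor whose pidfile exists
        local pidfile="$power_dir/$1.pid"
        [[ -e "$pidfile" ]] && kill -TERM "$(cat "$pidfile")"
    }
    stop_monitor collect-cpu-load
    stop_monitor collect-vmstat
    # the 1718006337 suffix on the pm log names is an epoch from date +%s;
    # date -d @1718006337 converts it back to wall-clock time when correlating logs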
00:03:36.379 07:58:58 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:36.379 07:58:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:36.379 07:58:58 -- common/autotest_common.sh@1454 -- # uname 00:03:36.379 07:58:58 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:03:36.379 07:58:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:36.379 07:58:58 -- common/autotest_common.sh@1474 -- # uname 00:03:36.379 07:58:58 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:03:36.379 07:58:58 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:36.379 07:58:58 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:36.379 07:58:58 -- spdk/autotest.sh@72 -- # hash lcov 00:03:36.379 07:58:58 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:36.379 07:58:58 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:36.379 --rc lcov_branch_coverage=1 00:03:36.379 --rc lcov_function_coverage=1 00:03:36.379 --rc genhtml_branch_coverage=1 00:03:36.379 --rc genhtml_function_coverage=1 00:03:36.379 --rc genhtml_legend=1 00:03:36.379 --rc geninfo_all_blocks=1 00:03:36.379 ' 00:03:36.379 07:58:58 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:36.379 --rc lcov_branch_coverage=1 00:03:36.379 --rc lcov_function_coverage=1 00:03:36.379 --rc genhtml_branch_coverage=1 00:03:36.379 --rc genhtml_function_coverage=1 00:03:36.379 --rc genhtml_legend=1 00:03:36.379 --rc geninfo_all_blocks=1 00:03:36.379 ' 00:03:36.379 07:58:58 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:36.379 --rc lcov_branch_coverage=1 00:03:36.379 --rc lcov_function_coverage=1 00:03:36.379 --rc genhtml_branch_coverage=1 00:03:36.379 --rc genhtml_function_coverage=1 00:03:36.379 --rc genhtml_legend=1 00:03:36.379 --rc geninfo_all_blocks=1 00:03:36.379 --no-external' 00:03:36.379 07:58:58 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:36.379 --rc lcov_branch_coverage=1 00:03:36.379 --rc lcov_function_coverage=1 00:03:36.379 --rc genhtml_branch_coverage=1 00:03:36.379 --rc genhtml_function_coverage=1 00:03:36.379 --rc genhtml_legend=1 00:03:36.379 --rc geninfo_all_blocks=1 00:03:36.379 --no-external' 00:03:36.379 07:58:58 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:36.637 lcov: LCOV version 1.14 00:03:36.637 07:58:58 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:54.740 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:54.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:06.947 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:06.947 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:06.947 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:06.947 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:06.947 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:06.947 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:06.947 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:06.947 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:06.947 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:06.947 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:06.947 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:06.948 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no 
functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:06.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:06.949 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:06.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:06.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:09.482 07:59:30 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:09.482 07:59:30 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:09.482 07:59:30 -- common/autotest_common.sh@10 -- # set +x 00:04:09.482 07:59:31 -- spdk/autotest.sh@91 -- # rm -f 00:04:09.482 07:59:31 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.048 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:10.048 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:10.048 07:59:31 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:10.048 07:59:31 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:10.048 07:59:31 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:10.048 07:59:31 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:10.048 07:59:31 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:10.048 07:59:31 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:10.048 07:59:31 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:10.048 07:59:31 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.048 07:59:31 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:10.048 07:59:31 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:10.048 07:59:31 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:04:10.048 07:59:31 -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:04:10.048 07:59:31 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:10.048 07:59:31 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:10.048 07:59:31 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:10.048 07:59:31 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n2 00:04:10.048 07:59:31 -- common/autotest_common.sh@1661 -- # local device=nvme1n2 00:04:10.048 07:59:31 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:10.048 07:59:31 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:10.048 07:59:31 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:10.048 07:59:31 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n3 00:04:10.048 07:59:31 -- common/autotest_common.sh@1661 -- # local device=nvme1n3 00:04:10.048 07:59:31 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:10.048 07:59:31 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:10.048 07:59:31 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:10.048 07:59:31 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.048 07:59:31 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:10.048 07:59:31 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme0n1 00:04:10.048 07:59:31 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:10.048 07:59:31 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:10.048 No valid GPT data, bailing 00:04:10.048 07:59:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:10.048 07:59:31 -- scripts/common.sh@391 -- # pt= 00:04:10.048 07:59:31 -- scripts/common.sh@392 -- # return 1 00:04:10.048 07:59:31 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:10.048 1+0 records in 00:04:10.048 1+0 records out 00:04:10.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046418 s, 226 MB/s 00:04:10.048 07:59:31 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.048 07:59:31 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:10.048 07:59:31 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:10.048 07:59:31 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:10.048 07:59:31 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:10.048 No valid GPT data, bailing 00:04:10.048 07:59:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:10.048 07:59:31 -- scripts/common.sh@391 -- # pt= 00:04:10.048 07:59:31 -- scripts/common.sh@392 -- # return 1 00:04:10.048 07:59:31 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:10.048 1+0 records in 00:04:10.048 1+0 records out 00:04:10.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491852 s, 213 MB/s 00:04:10.048 07:59:31 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.048 07:59:31 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:10.048 07:59:31 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:10.048 07:59:31 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:10.048 07:59:31 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:10.306 No valid GPT data, bailing 00:04:10.306 07:59:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:10.306 07:59:31 -- scripts/common.sh@391 -- # pt= 00:04:10.306 07:59:31 -- scripts/common.sh@392 -- # return 1 00:04:10.306 07:59:31 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:10.306 1+0 records in 00:04:10.306 1+0 records out 00:04:10.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044092 s, 238 MB/s 00:04:10.306 07:59:31 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.306 07:59:31 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:10.306 07:59:31 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:10.306 07:59:31 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:10.306 07:59:31 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:10.306 No valid GPT data, bailing 00:04:10.306 07:59:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:10.306 07:59:32 -- scripts/common.sh@391 -- # pt= 00:04:10.306 07:59:32 -- scripts/common.sh@392 -- # return 1 00:04:10.306 07:59:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:10.306 1+0 records in 00:04:10.306 1+0 records out 00:04:10.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449389 s, 233 MB/s 00:04:10.306 07:59:32 -- spdk/autotest.sh@118 -- # sync 00:04:10.306 07:59:32 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:10.306 07:59:32 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:10.306 07:59:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:12.206 07:59:33 -- spdk/autotest.sh@124 -- # uname -s 00:04:12.206 07:59:33 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:12.206 07:59:33 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:12.206 07:59:33 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:12.206 07:59:33 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:12.206 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:04:12.206 ************************************ 00:04:12.207 START TEST setup.sh 00:04:12.207 ************************************ 00:04:12.207 07:59:33 setup.sh -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:12.207 * Looking for test storage... 00:04:12.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:12.207 07:59:34 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:12.207 07:59:34 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:12.207 07:59:34 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:12.207 07:59:34 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:12.207 07:59:34 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:12.207 07:59:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.207 ************************************ 00:04:12.207 START TEST acl 00:04:12.207 ************************************ 00:04:12.207 07:59:34 setup.sh.acl -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:12.465 * Looking for test storage... 
00:04:12.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:12.465 07:59:34 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n2 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme1n2 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n3 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme1n3 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:12.465 07:59:34 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:12.465 07:59:34 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:12.465 07:59:34 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:12.465 07:59:34 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:12.465 07:59:34 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:12.465 07:59:34 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:12.465 07:59:34 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.465 07:59:34 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.032 07:59:34 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:13.032 07:59:34 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:13.032 07:59:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.032 07:59:34 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:13.032 07:59:34 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.032 07:59:34 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:13.968 07:59:35 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.968 Hugepages 00:04:13.968 node hugesize free / total 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.968 00:04:13.968 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:13.968 07:59:35 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:13.968 07:59:35 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:13.968 07:59:35 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:13.968 07:59:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:13.968 ************************************ 00:04:13.968 START TEST denied 00:04:13.968 ************************************ 00:04:13.968 07:59:35 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:04:13.968 07:59:35 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:13.968 07:59:35 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:13.968 07:59:35 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.968 07:59:35 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:13.968 07:59:35 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.905 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:14.905 07:59:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:14.905 07:59:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:14.905 07:59:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:14.905 07:59:36 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:14.905 07:59:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:14.905 07:59:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:14.905 07:59:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:14.905 07:59:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:14.905 07:59:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.905 07:59:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.473 00:04:15.473 real 0m1.530s 00:04:15.473 user 0m0.636s 00:04:15.473 sys 0m0.833s 00:04:15.473 07:59:37 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:15.473 07:59:37 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:15.473 ************************************ 00:04:15.473 END TEST denied 00:04:15.473 ************************************ 00:04:15.732 07:59:37 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:15.732 07:59:37 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:15.732 07:59:37 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:15.732 07:59:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:15.732 ************************************ 00:04:15.732 START TEST allowed 00:04:15.732 ************************************ 00:04:15.732 07:59:37 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:04:15.732 07:59:37 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:15.732 07:59:37 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:15.732 07:59:37 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.732 07:59:37 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:15.732 07:59:37 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:16.668 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.668 07:59:38 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:16.668 07:59:38 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:16.668 07:59:38 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:16.668 07:59:38 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:16.668 07:59:38 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:16.668 07:59:38 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:16.668 07:59:38 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:16.668 07:59:38 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:16.668 07:59:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.668 07:59:38 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.235 00:04:17.235 real 0m1.602s 00:04:17.235 user 0m0.690s 00:04:17.235 sys 0m0.901s 00:04:17.235 07:59:38 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:17.235 ************************************ 00:04:17.235 07:59:38 
setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:17.235 END TEST allowed 00:04:17.235 ************************************ 00:04:17.235 00:04:17.235 real 0m4.961s 00:04:17.235 user 0m2.153s 00:04:17.235 sys 0m2.742s 00:04:17.235 07:59:39 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:17.235 ************************************ 00:04:17.235 END TEST acl 00:04:17.235 07:59:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:17.235 ************************************ 00:04:17.235 07:59:39 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:17.235 07:59:39 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:17.235 07:59:39 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:17.235 07:59:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:17.235 ************************************ 00:04:17.235 START TEST hugepages 00:04:17.235 ************************************ 00:04:17.235 07:59:39 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:17.496 * Looking for test storage... 00:04:17.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6024536 kB' 'MemAvailable: 7403092 kB' 'Buffers: 2436 kB' 'Cached: 1592916 kB' 'SwapCached: 0 kB' 'Active: 435560 kB' 'Inactive: 1264020 kB' 'Active(anon): 114716 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264020 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 106008 kB' 'Mapped: 48632 kB' 'Shmem: 10488 kB' 'KReclaimable: 61260 kB' 'Slab: 132612 kB' 'SReclaimable: 61260 kB' 'SUnreclaim: 71352 kB' 'KernelStack: 6476 kB' 'PageTables: 4552 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 334400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:17.496 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.496 07:59:39 
[xtrace condensed: get_meminfo walks /proc/meminfo field by field -- Inactive(anon) through HugePages_Surp -- and skips every key that is not Hugepagesize with "continue"]
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
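The entries above are setup/common.sh's get_meminfo reading /proc/meminfo one "key: value" pair at a time and returning the value of the first key that matches the request (here Hugepagesize -> 2048 kB); clear_hp then runs an "echo 0" once per hugepages-*/ pool under each NUMA node to drop any pre-existing allocation. A minimal sketch of that lookup idea, written fresh for illustration rather than copied from the SPDK tree:

  get_meminfo_value() {
      # Return the value column for a single /proc/meminfo key, e.g. "Hugepagesize".
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the long run of "continue" in the trace above
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_value Hugepagesize   # prints 2048 on this test VM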
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:17.498 07:59:39 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:17.498 07:59:39 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:17.498 07:59:39 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:17.498 07:59:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:17.498 ************************************
00:04:17.498 START TEST default_setup
00:04:17.498 ************************************
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:17.498 07:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:18.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:18.328 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:18.328 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:18.328 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
[xtrace condensed: verify_nr_hugepages declares its locals (node, sorted_t, sorted_s, surp, resv, anon)]
00:04:18.328 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:18.328 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace condensed: get_meminfo sets get=AnonHugePages with no node argument, so it falls back to mem_f=/proc/meminfo, mapfiles it into mem[], strips any "Node <N>" prefix, and re-reads it with IFS=': ']
00:04:18.328 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127980 kB' 'MemAvailable: 9506384 kB' 'Buffers: 2436 kB' 'Cached: 1592908 kB' 'SwapCached: 0 kB' 'Active: 452136 kB' 'Inactive: 1264028 kB' 'Active(anon): 131292 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122676 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 60944 kB' 'Slab: 132204 kB' 'SReclaimable: 60944 kB' 'SUnreclaim: 71260 kB' 'KernelStack: 6352 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB'
[xtrace condensed: every /proc/meminfo field from MemTotal onward is compared against AnonHugePages and skipped with "continue" until the AnonHugePages line itself is reached]
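The get_test_nr_hugepages call traced a few entries back reads as plain integer arithmetic: the requested pool of 2097152 kB divided by the 2048 kB default hugepage size gives nr_hugepages=1024, and get_test_nr_hugepages_per_node then assigns that count to each requested node (only node 0 here). A rough standalone sketch of that sizing step, with made-up variable names rather than the exact SPDK code:

  size_kb=2097152                            # requested pool size in kB
  hugepage_kb=2048                           # Hugepagesize reported by /proc/meminfo
  nr_hugepages=$((size_kb / hugepage_kb))    # 1024
  nodes_test=()
  for node in 0; do                          # the test only asked for node 0
      nodes_test[node]=$nr_hugepages
  done
  echo "node0 -> ${nodes_test[0]} hugepages"   # node0 -> 1024 hugepages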
00:04:18.329 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:18.329 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:18.329 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:18.329 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:18.329 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed: the same get_meminfo preamble repeats, now with get=HugePages_Surp and an empty node argument, reading /proc/meminfo again]
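The anon=0 just above is verify_nr_hugepages accounting for transparent hugepages: the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test inspects a THP setting string (it matches the format of /sys/kernel/mm/transparent_hugepage/enabled), and only when THP is not forced off does the script sample AnonHugePages at all. A small illustrative sketch of that guard, not lifted from the SPDK scripts:

  # Hypothetical standalone check; path and logic mirror what the trace suggests.
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      # THP can hand out anonymous huge pages, so count them separately
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "AnonHugePages in use: ${anon} kB"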
00:04:18.329 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127480 kB' 'MemAvailable: 9505884 kB' 'Buffers: 2436 kB' 'Cached: 1592908 kB' 'SwapCached: 0 kB' 'Active: 452136 kB' 'Inactive: 1264028 kB' 'Active(anon): 131292 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122416 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 60944 kB' 'Slab: 132204 kB' 'SReclaimable: 60944 kB' 'SUnreclaim: 71260 kB' 'KernelStack: 6352 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB'
[xtrace condensed: every /proc/meminfo field is compared against HugePages_Surp and skipped with "continue" until the HugePages_Surp line itself is reached]
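Each of the get_meminfo calls in this stretch rescans the whole of /proc/meminfo for a single key (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd). Outside the test harness, the same hugepage counters can be pulled in one pass; this is a hypothetical standalone check, not part of the SPDK scripts:

  expected=1024   # pool size requested by the earlier get_test_nr_hugepages trace
  read -r total free rsvd surp < <(awk '
      /^HugePages_Total:/ {t=$2}
      /^HugePages_Free:/  {f=$2}
      /^HugePages_Rsvd:/  {r=$2}
      /^HugePages_Surp:/  {s=$2}
      END {print t, f, r, s}' /proc/meminfo)
  echo "total=$total free=$free rsvd=$rsvd surp=$surp"
  (( total == expected )) && echo "hugepage pool sized as requested"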
00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: the get_meminfo preamble runs once more with get=HugePages_Rsvd, an empty node argument and mem_f=/proc/meminfo]
00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127480 kB' 'MemAvailable: 9505884 kB' 'Buffers: 2436 kB' 'Cached: 1592908 kB' 'SwapCached: 0 kB' 'Active: 452220 kB' 'Inactive: 1264028 kB' 'Active(anon): 131376 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122240 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60944 kB' 'Slab: 132200 kB' 'SReclaimable: 60944 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6368 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB'
[xtrace condensed: the field scan against HugePages_Rsvd is in progress -- MemTotal through SwapFree have been skipped with "continue" so far] 00:04:18.331
07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.331 07:59:40 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.332 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.593 
07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.593 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.594 
07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:18.594 nr_hugepages=1024 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:18.594 resv_hugepages=0 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:18.594 surplus_hugepages=0 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:18.594 anon_hugepages=0 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127228 kB' 'MemAvailable: 9505632 kB' 'Buffers: 2436 kB' 'Cached: 1592908 kB' 'SwapCached: 0 kB' 'Active: 451964 kB' 'Inactive: 1264028 kB' 'Active(anon): 131120 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122256 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60944 kB' 'Slab: 132196 kB' 'SReclaimable: 60944 kB' 'SUnreclaim: 71252 kB' 'KernelStack: 6336 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 
07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.594 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
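(Note: the trace above and below is setup/common.sh's get_meminfo walking every key of /proc/meminfo — or /sys/devices/system/node/node<N>/meminfo when a NUMA node is requested — until it reaches the field asked for (HugePages_Surp, HugePages_Rsvd, HugePages_Total), echoing its value and returning; the escaped \H\u\g\e\P\a\g\e\s\_... strings are simply bash -x printing the glob pattern side of that comparison. A minimal stand-alone sketch of the same lookup, assuming only the standard meminfo layout; get_meminfo_sketch and its comments are illustrative stand-ins, not the actual SPDK helper, which uses mapfile plus extglob prefix stripping as seen in the trace:

get_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    # per-NUMA-node counters live under sysfs when a node index is given
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        # sysfs rows are prefixed "Node <N> "; /proc/meminfo rows are not
        [[ -n $node ]] && line=${line#Node $node }
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 1024 for HugePages_Total on this runner
            return 0
        fi
    done < "$mem_f"
    return 1
}

Once those values are fetched, the hugepages.sh checks visible at @107-@110 — (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) — confirm that the 1024 default hugepages reported system-wide are fully accounted for with zero surplus and zero reserved, and the same helper is then called per NUMA node (HugePages_Surp for node 0 here) to fill nodes_test.)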
00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.595 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127228 kB' 'MemUsed: 4114748 kB' 'SwapCached: 0 kB' 'Active: 451732 kB' 'Inactive: 1264028 kB' 'Active(anon): 130888 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1595344 kB' 'Mapped: 48692 kB' 'AnonPages: 122056 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60944 kB' 'Slab: 132196 kB' 'SReclaimable: 60944 kB' 'SUnreclaim: 71252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 
07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.596 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.597 07:59:40 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[remaining per-field meminfo comparisons omitted: every field before HugePages_Surp is read and skipped]
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:18.597 node0=1024 expecting 1024
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:18.597 
00:04:18.597 real 0m1.080s
00:04:18.597 user 0m0.499s
00:04:18.597 sys 0m0.532s
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:18.597 07:59:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:18.597 ************************************
00:04:18.597 END TEST default_setup
00:04:18.597 ************************************
00:04:18.597 07:59:40 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:18.597 07:59:40 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:18.597 07:59:40 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:18.597 07:59:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:18.597 ************************************
00:04:18.597 START TEST per_node_1G_alloc
00:04:18.597 ************************************
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
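With the 1 GiB-per-node request reduced to nodes_test[0]=512 above, the allocation itself is driven through scripts/setup.sh in the next step. The following is a minimal sketch of the same arithmetic and of the standard per-node sysfs write that such an allocation ultimately amounts to; the variable names and the explicit sysfs path are illustrative assumptions, not lifted from setup.sh:

  # Sketch (assumed names): derive the per-node 2 MiB page count the way the trace shows it,
  # then request it through the kernel's per-node hugepage sysfs interface (needs root).
  size_kb=1048576                                      # requested size per node (1 GiB)
  hugepage_kb=2048                                     # Hugepagesize reported in the meminfo dumps below
  nr_hugepages=$(( size_kb / hugepage_kb ))            # 1048576 / 2048 = 512
  node=0
  hp_dir=/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepage_kb}kB
  echo "$nr_hugepages" > "$hp_dir/nr_hugepages"        # what HUGENODE=0 NRHUGE=512 boils down to
  cat "$hp_dir/nr_hugepages"                           # expect 512 if the node had enough free memory

The test then re-reads the counters to confirm the kernel actually granted the pages, which is what the verify_nr_hugepages trace that follows is doing.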
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:18.597 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:18.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:18.857 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:18.857 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.857 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.121 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.121 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.122 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9177664 kB' 'MemAvailable: 10556080 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 452456 kB' 'Inactive: 1264040 kB' 'Active(anon): 131612 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122780 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 60944 kB' 'Slab: 132176 kB' 'SReclaimable: 60944 kB' 'SUnreclaim: 71232 kB' 'KernelStack: 6356 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB'
[field-by-field comparison against AnonHugePages omitted: each meminfo field above is read and skipped until AnonHugePages is reached]
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
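The trace above is the bash xtrace expansion of a get_meminfo-style lookup: the snapshot emitted by printf is scanned field by field with IFS=': ' until the requested key (here AnonHugePages) matches, and its value is echoed. A minimal, self-contained re-creation of that pattern follows; the function name and the per-node fallback are illustrative assumptions, and only the IFS/read/compare idiom is taken from the trace:

  # Sketch: extract one field from /proc/meminfo (or a node's meminfo) the way the trace does it.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Assumed per-node variant: node meminfo lives under /sys/devices/system/node/node<N>/meminfo.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      while IFS=': ' read -r var val _; do
          # Skip every field until the requested key matches, then print its value.
          [[ $var == "$get" ]] && echo "$val" && return 0
      done < "$mem_f"
      return 1
  }
  # Example: get_meminfo_sketch HugePages_Free   -> 512 after the allocation above

The real helper additionally strips the "Node <N>" prefix from per-node meminfo lines (the mem=("${mem[@]#Node +([0-9]) }") step visible in the trace) before scanning.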
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.123 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9177664 kB' 'MemAvailable: 10556076 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 452052 kB' 'Inactive: 1264040 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122328 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132172 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71236 kB' 'KernelStack: 6336 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB'
[field-by-field comparison against HugePages_Surp omitted: each meminfo field above is read and skipped until HugePages_Surp is reached]
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
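At this point the test has anon=0 and surp=0 for the system; the reserved count is fetched next in the same way, and the per-node totals are then compared against the request (the "node0=... expecting ..." line seen for default_setup above). As a rough illustration of what that comparison amounts to, using the standard per-node sysfs counters rather than the meminfo scan, with the caveat that the exact arithmetic in setup/hugepages.sh may differ:

  # Sketch: per-node counters the verification is conceptually checking (assumed direct sysfs reads).
  node=0
  hp_dir=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB
  nr=$(cat "$hp_dir/nr_hugepages")          # pages currently configured on this node
  surp=$(cat "$hp_dir/surplus_hugepages")   # overcommitted pages, not counted toward the target
  echo "node${node}=$(( nr - surp )) expecting ${NRHUGE:-512}"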
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9177916 kB' 'MemAvailable: 10556328 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 451756 kB' 'Inactive: 1264040 kB' 'Active(anon): 130912 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122020 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132172 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71236 kB' 'KernelStack: 6320 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB'
[field-by-field comparison against HugePages_Rsvd follows; the trace below resumes mid-scan]
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:19.125 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.126 07:59:40
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.126 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 
07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:19.127 nr_hugepages=512 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:19.127 resv_hugepages=0 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.127 surplus_hugepages=0 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.127 anon_hugepages=0 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9178436 kB' 'MemAvailable: 10556848 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 452016 kB' 'Inactive: 1264040 kB' 'Active(anon): 131172 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122280 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132172 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71236 kB' 'KernelStack: 6320 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.127 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.128 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@33 -- # echo 512 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9178500 kB' 'MemUsed: 3063476 kB' 'SwapCached: 0 kB' 'Active: 451992 kB' 'Inactive: 1264040 kB' 'Active(anon): 131148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1595348 kB' 'Mapped: 48696 kB' 'AnonPages: 122256 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60936 kB' 'Slab: 132176 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.129 07:59:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.129 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.130 node0=512 expecting 512 00:04:19.130 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:19.131 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:19.131 00:04:19.131 real 0m0.545s 00:04:19.131 user 0m0.291s 00:04:19.131 sys 0m0.288s 00:04:19.131 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:19.131 07:59:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.131 ************************************ 00:04:19.131 END TEST per_node_1G_alloc 00:04:19.131 ************************************ 00:04:19.131 07:59:40 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:19.131 07:59:40 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:19.131 07:59:40 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:19.131 07:59:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.131 ************************************ 00:04:19.131 START TEST even_2G_alloc 00:04:19.131 ************************************ 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:19.131 07:59:40 
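The trace above closes out per_node_1G_alloc (the 'node0=512 expecting 512' check passes and the test finishes in about 0.5 s of real time) and opens even_2G_alloc, which begins by turning a requested size into a hugepage count via get_test_nr_hugepages 2097152. The sketch below paraphrases that arithmetic using only values visible in the log; it is not the script's verbatim text, and default_hugepages is assumed to be the 2048 kB Hugepagesize shown in the meminfo dumps further down.

    # size and default_hugepages are both in kB: 2 GiB requested / 2 MiB pages = 1024 pages
    default_hugepages=2048      # from "Hugepagesize: 2048 kB" in the meminfo dumps below
    size=2097152                # argument passed by even_2G_alloc (2 GiB)
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))
    fi
    echo "$nr_hugepages"        # -> 1024, matching "nr_hugepages=1024" in the trace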
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.131 07:59:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.657 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.657 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- 
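With nr_hugepages=1024 in hand, get_test_nr_hugepages_per_node spreads the pages across NUMA nodes; the trace shows no user-supplied node list and a single node, so nodes_test[0] ends up holding all 1024 pages. The test then sets NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, re-runs scripts/setup.sh (the PCI device lines above), and enters verify_nr_hugepages, whose first step is get_meminfo AnonHugePages, traced in full below. The loop here is a simplified re-creation of that per-node split, assuming a plain even division; the ': 0' no-ops visible in the trace are omitted.

    # Simplified even split of the requested hugepages across the detected nodes.
    _nr_hugepages=1024
    _no_nodes=1                                # only one NUMA node in this VM
    declare -a nodes_test=()
    per_node=$(( _nr_hugepages / _no_nodes ))
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$per_node    # nodes_test[0]=1024 here, as in the trace
        (( _no_nodes-- ))
    done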
setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8133556 kB' 'MemAvailable: 9511968 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 452228 kB' 'Inactive: 1264040 kB' 'Active(anon): 131384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122504 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132220 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71284 kB' 'KernelStack: 6368 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 
07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.657 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 
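The walk above is nearing its end: once it reaches the AnonHugePages line it echoes 0, the caller records anon=0 (visible just below), and a second identical walk starts for HugePages_Surp. Each of these long [[ field == pattern ]] / continue runs is common.sh scanning a captured copy of /proc/meminfo (or a node's meminfo when a node argument is given) field by field until it hits the requested key; the dump embedded in the trace already shows the allocation in place (HugePages_Total: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB). A condensed sketch of that lookup, not the script's exact text:

    # Return the value of one /proc/meminfo (or per-node meminfo) field.
    # The real helper also strips the leading "Node N " prefix from per-node files.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    get_meminfo HugePages_Total   # -> 1024 on this runner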
07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8133556 kB' 'MemAvailable: 9511968 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 451888 kB' 'Inactive: 1264040 kB' 'Active(anon): 131044 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122416 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132212 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71276 kB' 'KernelStack: 6352 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 
07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.658 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 
07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8133556 kB' 'MemAvailable: 9511968 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 451768 kB' 'Inactive: 1264040 kB' 'Active(anon): 130924 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122324 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132192 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6336 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 
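The second walk has now returned as well (surp=0) and a third is underway for HugePages_Rsvd. With those readings plus the totals already visible in the dumps, the check the test is driving toward can be expressed roughly as below. This is an illustration of the intent using plain awk lookups, not a quote of verify_nr_hugepages; the real script routes every read through its get_meminfo helper, and the checks here are named for illustration only.

    # Rough shape of the verification: the pool created by setup.sh should hold
    # exactly NRHUGE pages, with no surplus or unexpectedly reserved pages.
    # Values in comments are the ones visible in this log.
    NRHUGE=1024
    anon=0                                                        # AnonHugePages, kB (first walk)
    surp=0                                                        # HugePages_Surp (second walk)
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # 0 in the dump above
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in the dump above
    (( total == NRHUGE )) || echo "expected $NRHUGE hugepages, found $total"
    (( surp == 0 ))       || echo "unexpected surplus hugepages: $surp"
    (( resv == 0 ))       || echo "unexpected reserved hugepages: $resv"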
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.659 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 
07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 
07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.660 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:19.661 nr_hugepages=1024 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:19.661 resv_hugepages=0 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.661 surplus_hugepages=0 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.661 anon_hugepages=0 00:04:19.661 07:59:41 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8133816 kB' 'MemAvailable: 9512228 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 451816 kB' 'Inactive: 1264040 kB' 'Active(anon): 130972 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122372 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132192 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6368 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
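For readers skimming this trace: the long runs of "continue" above and below are setup/common.sh's get_meminfo helper scanning every key of /proc/meminfo (or of a per-node meminfo file) until it reaches the one requested, then echoing that key's value. A condensed, hypothetical re-sketch of that lookup, assuming a Linux host; the function name is mine and this is not the literal SPDK helper, which the xtrace itself shows in full:

get_meminfo_sketch() {
    # e.g. get_meminfo_sketch HugePages_Rsvd      -> system-wide value from /proc/meminfo
    #      get_meminfo_sketch HugePages_Surp 0    -> node 0 value from sysfs
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node files prefix every line with "Node N "; strip it so the keys match.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}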
00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.661 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8133816 kB' 'MemUsed: 4108160 kB' 'SwapCached: 0 kB' 'Active: 451828 kB' 'Inactive: 1264040 kB' 'Active(anon): 130984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1595348 kB' 'Mapped: 48696 kB' 'AnonPages: 122092 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60936 kB' 'Slab: 132176 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 
07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.662 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.663 node0=1024 expecting 1024 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:19.663 00:04:19.663 real 0m0.554s 00:04:19.663 user 0m0.231s 00:04:19.663 sys 0m0.326s 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:19.663 07:59:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.663 
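The even_2G_alloc pass above boils down to two checks: the kernel-reported HugePages_Total must equal the requested page count plus any surplus and reserved pages, and the single NUMA node must hold the full 1024-page allocation ("node0=1024 expecting 1024"). A hypothetical, self-contained rendering of that bookkeeping, assuming a Linux host with one node; the function name is mine, and the real hugepages.sh tracks the counts in per-node arrays rather than a single node0 variable:

verify_even_alloc_sketch() {
    local expected=1024
    local total surp resv node0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in the run above
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0
    (( total == expected + surp + resv )) || return 1
    # Per-node lines read "Node 0 HugePages_Total: 1024", so take the last field.
    node0=$(awk '/HugePages_Total:/ {print $NF}' /sys/devices/system/node/node0/meminfo)
    echo "node0=$node0 expecting $expected"
    [[ $node0 == "$expected" ]]
}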
************************************ 00:04:19.663 END TEST even_2G_alloc 00:04:19.663 ************************************ 00:04:19.663 07:59:41 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:19.663 07:59:41 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:19.663 07:59:41 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:19.663 07:59:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.921 ************************************ 00:04:19.921 START TEST odd_alloc 00:04:19.921 ************************************ 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.921 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:20.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.190 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:20.190 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:20.190 07:59:41 
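For the odd_alloc case that starts just above, the numbers line up as follows (my reading of the trace, not something the log states outright): HUGEMEM=2049 MiB is 2049 x 1024 = 2098176 kB, and with the default 2048 kB hugepage size that works out to 2098176 / 2048 = 1024.5, which the helper evidently rounds up to the odd count nr_hugepages=1025; 1025 pages x 2048 kB = 2099200 kB, matching the 'Hugetlb: 2099200 kB' figure in the meminfo snapshot just below.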
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.190 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127768 kB' 'MemAvailable: 9506180 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 452312 kB' 'Inactive: 1264040 kB' 'Active(anon): 131468 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122588 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132156 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71220 kB' 'KernelStack: 6392 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.191 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127768 kB' 'MemAvailable: 9506180 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 452248 kB' 'Inactive: 1264040 kB' 'Active(anon): 131404 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122300 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132192 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6352 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13459992 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.192 07:59:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.192 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.193 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:20.194 07:59:41 
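# Reader's note: at this point verify_nr_hugepages has established anon=0 (AnonHugePages) and surp=0
# (HugePages_Surp). The trace below repeats the same meminfo scan for HugePages_Rsvd (resv=0) and then
# checks the totals. Condensed from the hugepages.sh steps visible in the trace (@99-@110); this is a
# sketch of the bookkeeping, not the verbatim script:
#
#   anon=$(get_meminfo AnonHugePages)           # 0 kB in this run
#   surp=$(get_meminfo HugePages_Surp)          # 0
#   resv=$(get_meminfo HugePages_Rsvd)          # 0, read in the scan that follows
#   nr_hugepages=1025
#   echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" "surplus_hugepages=$surp" "anon_hugepages=$anon"
#   (( 1025 == nr_hugepages + surp + resv ))    # hugepages.sh@107
#   (( 1025 == nr_hugepages ))                  # hugepages.sh@109
#   total=$(get_meminfo HugePages_Total)        # hugepages.sh@110, the scan that begins below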
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127768 kB' 'MemAvailable: 9506180 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 451840 kB' 'Inactive: 1264040 kB' 'Active(anon): 130996 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122360 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132192 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6336 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.194 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.195 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 
07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.196 nr_hugepages=1025 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:20.196 resv_hugepages=0 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.196 surplus_hugepages=0 00:04:20.196 anon_hugepages=0 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127768 kB' 'MemAvailable: 9506180 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 
451944 kB' 'Inactive: 1264040 kB' 'Active(anon): 131100 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122248 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132192 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6352 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.196 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 
07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.197 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.471 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:20.473 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:20.474 
07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128436 kB' 'MemUsed: 4113540 kB' 'SwapCached: 0 kB' 'Active: 451976 kB' 'Inactive: 1264040 kB' 'Active(anon): 131132 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1595348 kB' 'Mapped: 48692 kB' 'AnonPages: 122340 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60936 kB' 'Slab: 132184 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.474 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.475 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.476 node0=1025 expecting 1025 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:20.476 
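[editor's note] The odd_alloc verification above repeatedly splits /proc/meminfo (or a per-node meminfo file) on `IFS=': '`, skips every key that is not the one requested, and echoes the matching value before comparing it against the expected hugepage count. The following is only a minimal, hypothetical sketch of that lookup pattern for readers of this log; the function name `get_meminfo_sketch` and its exact argument handling are illustrative assumptions, not the real setup/common.sh API.

    #!/usr/bin/env bash
    # Hypothetical sketch (NOT the actual setup/common.sh): look up one key in
    # /proc/meminfo, or in a per-node meminfo file when a node id is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # Per-node meminfo lives under /sys and prefixes each line with "Node <n> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node "$node" }        # drop the per-node prefix, if present
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # skip every other key, as the trace does
            echo "$val"                       # kB for sizes, a plain count for HugePages_*
            return 0
        done < "$mem_f"
        return 1
    }

    # e.g. a check analogous to the odd_alloc assertion logged above:
    # (( $(get_meminfo_sketch HugePages_Total) == 1025 )) && echo "node0=1025 expecting 1025"
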
00:04:20.476 real 0m0.572s 00:04:20.476 user 0m0.275s 00:04:20.476 sys 0m0.297s 00:04:20.476 ************************************ 00:04:20.476 END TEST odd_alloc 00:04:20.476 ************************************ 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:20.476 07:59:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:20.476 07:59:42 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:20.476 07:59:42 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:20.476 07:59:42 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:20.476 07:59:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:20.476 ************************************ 00:04:20.476 START TEST custom_alloc 00:04:20.476 ************************************ 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.476 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:20.477 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.478 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.478 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.478 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:20.478 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.478 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:20.478 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:20.478 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:20.478 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:20.478 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.478 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:20.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.742 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:20.742 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.742 07:59:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9179992 kB' 'MemAvailable: 10558404 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 452140 kB' 'Inactive: 1264040 kB' 'Active(anon): 131296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122448 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132172 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71236 kB' 'KernelStack: 6356 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.742 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.743 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9179992 kB' 'MemAvailable: 10558404 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 451756 kB' 'Inactive: 1264040 kB' 'Active(anon): 130912 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122336 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132176 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71240 kB' 'KernelStack: 6336 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.744 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.007 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9179740 kB' 'MemAvailable: 10558152 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 451780 kB' 'Inactive: 1264040 kB' 'Active(anon): 130936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122148 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132176 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71240 kB' 'KernelStack: 6352 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.008 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.009 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:21.010 nr_hugepages=512 00:04:21.010 resv_hugepages=0 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.010 surplus_hugepages=0 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.010 anon_hugepages=0 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.010 
07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9179740 kB' 'MemAvailable: 10558152 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 451704 kB' 'Inactive: 1264040 kB' 'Active(anon): 130860 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122332 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132156 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71220 kB' 'KernelStack: 6320 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.010 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.011 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 
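The loop continuing below is the same scan, now looking for HugePages_Total; it echoes 512 and the caller re-asserts the pool identity it already checked once above: the kernel's HugePages_Total must equal the requested page count plus surplus plus reserved pages. A hedged sketch of that consistency check, reusing the lookup sketched earlier (the helper names are illustrative, not the exact hugepages.sh code):

    # Assumes get_meminfo_sketch from the earlier sketch is in scope.
    verify_hugepage_accounting() {
        local expected=$1 total surp resv
        total=$(get_meminfo_sketch HugePages_Total)
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        # The kernel's pool size must equal the requested pages plus surplus plus reserved.
        if (( total == expected + surp + resv )); then
            echo "hugepage pool consistent: $total pages"
        else
            echo "mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
            return 1
        fi
    }

In this run the arithmetic is 512 == 512 + 0 + 0, so both the hugepages.sh@107 and @110 assertions pass.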
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9179488 kB' 'MemUsed: 3062488 kB' 'SwapCached: 0 kB' 
'Active: 452188 kB' 'Inactive: 1264040 kB' 'Active(anon): 131344 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1595348 kB' 'Mapped: 48692 kB' 'AnonPages: 122500 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60936 kB' 'Slab: 132156 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 
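Just before this per-node scan, get_nodes (hugepages.sh@27-33) enumerated /sys/devices/system/node/node* and found a single node, so nodes_sys[0]=512 and no_nodes=1; the read loop running through these lines is then pulling HugePages_Surp out of node0's own meminfo file. A short sketch of that node enumeration, under the assumption that counting the node<N> directories is all that matters here (the array name is illustrative):

    # Enumerate NUMA nodes the same way the get_nodes walk above does:
    # one entry per /sys/devices/system/node/node<N> directory.
    nodes_sys=()
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node_dir ]] || continue
        nodes_sys[${node_dir##*node}]=512   # this test wants 512 pages on every node it finds
    done
    echo "no_nodes=${#nodes_sys[@]}"        # 1 on this single-node VM

With only node0 present, the expectation collapses to the single 'node0=512 expecting 512' line printed a little further on.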
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.013 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:21.014 node0=512 expecting 512 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:21.014 00:04:21.014 real 0m0.562s 00:04:21.014 user 0m0.236s 00:04:21.014 sys 0m0.325s 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:21.014 07:59:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:21.014 ************************************ 00:04:21.014 END TEST custom_alloc 00:04:21.014 ************************************ 00:04:21.014 07:59:42 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:21.014 07:59:42 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:21.014 07:59:42 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:21.014 07:59:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.014 ************************************ 00:04:21.014 START TEST no_shrink_alloc 00:04:21.014 ************************************ 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:21.014 07:59:42 
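custom_alloc closes with node0 holding exactly the 512 pages it expected (real 0m0.562s), and no_shrink_alloc immediately derives its own target: get_test_nr_hugepages is called with 2097152 (kB) for node 0, and with the 2048 kB Hugepagesize reported in the meminfo dumps that works out to the nr_hugepages=1024 seen in the next lines. A small sketch of just that conversion; the real helper also handles multiple nodes and other knobs not shown in this trace:

    # Convert a requested size in kB into a hugepage count.
    # default_hugepages mirrors the 2048 kB Hugepagesize in the meminfo dumps above.
    size_to_nr_hugepages() {
        local size_kb=$1 default_hugepages=2048
        (( size_kb >= default_hugepages )) || return 1   # mirrors the "size >= default_hugepages" guard
        echo $(( size_kb / default_hugepages ))
    }

    size_to_nr_hugepages 2097152   # 1024 pages, the nr_hugepages no_shrink_alloc settles on
    size_to_nr_hugepages 1048576   # 512 pages, matching the 1048576 kB Hugetlb total custom_alloc pinned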
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.014 07:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.273 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:21.273 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:21.536 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:21.536 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126852 kB' 'MemAvailable: 9505264 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 452504 kB' 'Inactive: 1264040 kB' 'Active(anon): 131660 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132256 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71320 kB' 'KernelStack: 6292 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
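A few lines up, verify_nr_hugepages tested the transparent-hugepage policy string with [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]; the backslash-escaped pattern is just xtrace's rendering of a literal "[never]" match. Because the policy is not [never], the loop running through here goes on to read AnonHugePages, which comes back 0. A hedged sketch of that probe; the sysfs path is the standard location for the policy string and is an assumption about where the script picks it up, and get_meminfo_sketch refers back to the earlier sketch:

    # Record anonymous THP usage only when transparent hugepages are not globally disabled.
    # /sys/kernel/mm/transparent_hugepage/enabled prints e.g. "always [madvise] never".
    thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp_enabled != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB on this VM
    else
        anon=0
    fi
    echo "anon_hugepages=$anon"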
00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.537 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126852 kB' 'MemAvailable: 9505264 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 451828 kB' 'Inactive: 1264040 kB' 'Active(anon): 130984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122376 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132276 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71340 kB' 'KernelStack: 6336 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.538 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 
07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126852 kB' 'MemAvailable: 9505264 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 451828 kB' 'Inactive: 1264040 kB' 'Active(anon): 130984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122156 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132260 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71324 kB' 'KernelStack: 6352 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 
07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.542 nr_hugepages=1024 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.542 resv_hugepages=0 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.542 surplus_hugepages=0 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.542 anon_hugepages=0 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.542 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126852 kB' 'MemAvailable: 9505264 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 452080 kB' 'Inactive: 1264040 kB' 'Active(anon): 131236 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122408 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 132260 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71324 kB' 'KernelStack: 6352 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 
kB' 'Committed_AS: 351296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 
07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 
07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- 
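Note for anyone reading this trace: the get_meminfo call that just returned walks /proc/meminfo one 'key: value' pair at a time with IFS=': ', skipping every field until it reaches the requested key (HugePages_Total here) and then echoing its value. A minimal standalone sketch of that lookup follows; the helper name is hypothetical and it omits setup/common.sh's node handling and mapfile plumbing:

  # Sketch of the field-by-field lookup shown in the trace above.
  meminfo_lookup() {                        # hypothetical name, not the script's get_meminfo
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue # MemTotal, MemFree, ... are skipped this way
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1                              # requested key not present
  }

  meminfo_lookup HugePages_Total            # would print 1024 given the dump above
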
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126852 kB' 'MemUsed: 4115124 kB' 'SwapCached: 0 kB' 'Active: 452128 kB' 'Inactive: 1264040 kB' 'Active(anon): 131284 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1595348 kB' 'Mapped: 48692 kB' 'AnonPages: 122428 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60936 kB' 'Slab: 132260 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 71324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.544 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- 
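A quick arithmetic check on the node0 dump just printed: its MemUsed figure is simply MemTotal minus MemFree, 12241976 kB - 8126852 kB = 4115124 kB, which matches the 'MemUsed: 4115124 kB' line above.

  echo $(( 12241976 - 8126852 ))   # prints 4115124, the MemUsed value in the node0 dump
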
setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 
07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.545 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.546 07:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.546 node0=1024 expecting 1024 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.546 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.804 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:21.804 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:22.068 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.068 07:59:43 
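To summarize the per-node pass that just finished: get_nodes enumerates /sys/devices/system/node/node+([0-9]), and for node0 the same field scan runs against that node's meminfo file, whose lines carry a 'Node 0 ' prefix that the script strips with the extglob pattern seen in the trace. HugePages_Surp comes back 0, node0 still holds the expected 1024 hugepages, and scripts/setup.sh therefore leaves the allocation alone ('Requested 512 hugepages but 1024 already allocated on node0'). A sketch of that per-node lookup, assuming extglob is enabled as it is in the original scripts (the helper name is illustrative):

  shopt -s extglob                          # needed for the +([0-9]) patterns below
  node_meminfo_lookup() {                   # hypothetical helper
      local want=$1 node=$2 var val _ line
      while read -r line; do
          line="${line#Node +([0-9]) }"     # 'Node 0 HugePages_Surp: 0' -> 'HugePages_Surp: 0'
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$want" ]] || continue
          echo "$val"
          return 0
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }

  for node_dir in /sys/devices/system/node/node+([0-9]); do
      node=${node_dir##*node}
      echo "node${node} surplus: $(node_meminfo_lookup HugePages_Surp "$node")"
  done
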
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128236 kB' 'MemAvailable: 9506644 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 448444 kB' 'Inactive: 1264040 kB' 'Active(anon): 127600 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118968 kB' 'Mapped: 48060 kB' 'Shmem: 10464 kB' 'KReclaimable: 60924 kB' 'Slab: 132088 kB' 'SReclaimable: 60924 kB' 'SUnreclaim: 71164 kB' 'KernelStack: 6280 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB' 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.068 07:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.068 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 
07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128236 kB' 'MemAvailable: 9506644 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 447884 kB' 'Inactive: 1264040 kB' 'Active(anon): 127040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.070 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128236 kB' 'MemAvailable: 9506644 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 447884 kB' 'Inactive: 1264040 kB' 'Active(anon): 127040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118200 kB' 'Mapped: 47932 kB' 'Shmem: 10464 kB' 'KReclaimable: 60924 kB' 'Slab: 132080 kB' 'SReclaimable: 60924 kB' 'SUnreclaim: 71156 kB' 'KernelStack: 6308 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB'
[... setup/common.sh@32 then checks every key of the snapshot in order against HugePages_Surp; everything from MemTotal through HugePages_Rsvd misses, hits continue, and re-reads the next field at @31 ...]
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
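The prologue above also shows the helper's per-NUMA-node path: setup/common.sh@23 probes /sys/devices/system/node/node<N>/meminfo (the node variable is empty in this call, hence the odd node/node/meminfo path), and @29 strips the "Node <N> " prefix those files carry before the same key scan runs. A sketch of that variant under those assumptions, with illustrative names rather than the script's own:

#!/usr/bin/env bash
# Sketch only: look a key up in one NUMA node's meminfo.  Per-node lines read
# "Node 0 HugePages_Surp:      0", so the "Node <N> " prefix is dropped first;
# the traced helper does the equivalent strip for the whole mapfile'd array at @29.
get_node_meminfo_key() {
    local get=$1 node=$2 line var val _
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || return 1
    while read -r line; do
        line=${line#"Node $node "}                  # drop the per-node prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

get_node_meminfo_key HugePages_Surp 0   # should print 0 here, matching the global counter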
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.072 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128236 kB' 'MemAvailable: 9506644 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 447884 kB' 'Inactive: 1264040 kB' 'Active(anon): 127040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118200 kB' 'Mapped: 47932 kB' 'Shmem: 10464 kB' 'KReclaimable: 60924 kB' 'Slab: 132080 kB' 'SReclaimable: 60924 kB' 'SUnreclaim: 71156 kB' 'KernelStack: 6308 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB'
[... the same key-by-key scan runs against HugePages_Rsvd; everything from MemTotal through HugePages_Free misses, hits setup/common.sh@32 continue, and re-reads the next field at @31 ...]
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:22.074 nr_hugepages=1024
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
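The two arithmetic checks above are the heart of the no_shrink_alloc step: the requested pool (nr_hugepages=1024 here) must still match what /proc/meminfo reports, with no surplus or reserved pages hiding the difference. A hedged re-statement of that consistency check, using an illustrative awk helper instead of the test's own get_meminfo:

#!/usr/bin/env bash
# Sketch only: mirror the shape of the checks traced at setup/hugepages.sh@107 and
# @109 above, with HugePages_Total standing in for the script's nr_hugepages variable.
meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

expected=1024                         # the pool size this test run works with
total=$(meminfo HugePages_Total)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)

(( expected == total + surp + resv )) || echo "unexpected surplus/reserved hugepages"
(( expected == total )) || echo "hugepage pool shrank or grew"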
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.074 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.075 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128496 kB' 'MemAvailable: 9506904 kB' 'Buffers: 2436 kB' 'Cached: 1592912 kB' 'SwapCached: 0 kB' 'Active: 447848 kB' 'Inactive: 1264040 kB' 'Active(anon): 127004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118424 kB' 'Mapped: 47932 kB' 'Shmem: 10464 kB' 'KReclaimable: 60924 kB' 'Slab: 132080 kB' 'SReclaimable: 60924 kB' 'SUnreclaim: 71156 kB' 'KernelStack: 6224 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 7172096 kB' 'DirectMap1G: 7340032 kB'
[... the key-by-key scan now runs against HugePages_Total; every field from MemTotal through HardwareCorrupted misses, hits setup/common.sh@32 continue, and re-reads the next field at @31 ...]
00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:22.076 07:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.076 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.077 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128496 kB' 'MemUsed: 4113480 kB' 'SwapCached: 0 kB' 'Active: 447844 kB' 'Inactive: 1264040 kB' 'Active(anon): 127000 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1264040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1595348 kB' 'Mapped: 47932 kB' 'AnonPages: 118148 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60924 kB' 'Slab: 132080 kB' 'SReclaimable: 60924 kB' 'SUnreclaim: 71156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.077 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.077 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.077 07:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.077 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.077 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.077 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace repeats for every remaining node0 meminfo field (MemUsed through Unaccepted), none of which matches ...]
00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.078 node0=1024 expecting 1024 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.078 00:04:22.078 real 0m1.058s 00:04:22.078 user 0m0.526s 00:04:22.078 sys 0m0.597s 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:22.078 07:59:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.078 ************************************ 00:04:22.078 END TEST no_shrink_alloc 00:04:22.078 ************************************ 00:04:22.078 07:59:43 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:22.078 07:59:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:22.078 07:59:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:22.078 07:59:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:22.078 07:59:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:22.078 07:59:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:22.078 07:59:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:22.078 07:59:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:22.078 07:59:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:22.078 00:04:22.078 real 0m4.817s 00:04:22.078 user 0m2.206s 00:04:22.078 sys 0m2.640s 00:04:22.078 07:59:43 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:22.078 ************************************ 00:04:22.078 07:59:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.078 END TEST hugepages 00:04:22.078 ************************************ 00:04:22.078 07:59:43 setup.sh -- 
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:22.078 07:59:43 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:22.078 07:59:43 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:22.078 07:59:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.078 ************************************ 00:04:22.078 START TEST driver 00:04:22.078 ************************************ 00:04:22.078 07:59:43 setup.sh.driver -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:22.338 * Looking for test storage... 00:04:22.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:22.338 07:59:44 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:22.338 07:59:44 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.338 07:59:44 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.905 07:59:44 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:22.905 07:59:44 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:22.905 07:59:44 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:22.905 07:59:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:22.905 ************************************ 00:04:22.905 START TEST guess_driver 00:04:22.905 ************************************ 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:22.905 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:04:22.905 Looking for driver=uio_pci_generic 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.905 07:59:44 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.472 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:23.472 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:23.472 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.472 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.472 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:23.472 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.730 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.730 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:23.730 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.730 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:23.730 07:59:45 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:23.730 07:59:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.730 07:59:45 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.297 00:04:24.297 real 0m1.450s 00:04:24.297 user 0m0.539s 00:04:24.297 sys 0m0.910s 00:04:24.297 07:59:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:24.297 07:59:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:24.297 ************************************ 00:04:24.297 END TEST guess_driver 00:04:24.297 ************************************ 00:04:24.297 00:04:24.297 real 0m2.134s 00:04:24.297 user 0m0.778s 00:04:24.297 sys 0m1.413s 00:04:24.298 07:59:46 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:24.298 07:59:46 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:24.298 ************************************ 00:04:24.298 END TEST driver 00:04:24.298 ************************************ 00:04:24.298 07:59:46 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:24.298 07:59:46 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:24.298 07:59:46 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:24.298 07:59:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.298 ************************************ 00:04:24.298 START TEST devices 00:04:24.298 
************************************ 00:04:24.298 07:59:46 setup.sh.devices -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:24.556 * Looking for test storage... 00:04:24.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:24.556 07:59:46 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:24.556 07:59:46 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:24.556 07:59:46 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.556 07:59:46 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n2 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n2 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n3 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n3 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:25.124 07:59:46 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
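The get_zoned_devs pass above needs nothing beyond sysfs: a block device is treated as zoned when /sys/block/<dev>/queue/zoned exists and holds something other than "none". A minimal standalone sketch of that check, assuming only the sysfs layout shown in the log and not reusing the autotest_common.sh helper itself:

    # Sketch: collect zoned NVMe block devices the way the entries above do.
    # Device names are whatever /sys/block exposes; nothing here is SPDK code.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        zoned_file=$nvme/queue/zoned
        # A device without the attribute, or reporting "none", is not zoned.
        if [[ -e $zoned_file && $(<"$zoned_file") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done
    echo "zoned devices: ${!zoned_devs[*]}"

In this run the map stays empty, which is why every nvme disk proceeds to the size and in-use checks that follow.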
00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:25.124 07:59:46 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:25.124 07:59:46 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:25.124 07:59:46 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:25.124 No valid GPT data, bailing 00:04:25.124 07:59:46 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.124 07:59:46 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:25.124 07:59:46 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:25.383 07:59:46 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:25.383 07:59:46 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:25.383 07:59:46 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:25.383 07:59:46 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:25.383 07:59:46 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:25.383 07:59:46 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.383 07:59:46 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:25.383 07:59:46 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.383 07:59:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:25.383 07:59:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:25.383 07:59:46 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:25.383 07:59:46 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:25.383 07:59:46 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:25.383 07:59:46 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:25.383 07:59:46 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:25.383 No valid GPT data, bailing 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:25.383 07:59:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:25.383 07:59:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:25.383 07:59:47 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
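As the nvme0n1 and nvme0n2 iterations above show, each candidate disk passes two gates before it is added to the test pool: blkid must report an empty PTTYPE (the "No valid GPT data, bailing" lines), and the disk must be at least min_disk_size bytes. A rough standalone equivalent of those gates, using the 512-byte sector count from sysfs; the helper names are illustrative, not the functions from setup/devices.sh:

    # Sketch: keep only empty disks that are large enough for the tests.
    # min_disk_size mirrors the 3 GiB threshold printed above.
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    disk_is_free() {
        # No partition-table type reported means nothing is using the disk.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null) ]]
    }
    disk_size_bytes() {
        # /sys/block/<dev>/size is counted in 512-byte sectors.
        echo $(( $(cat "/sys/block/$1/size") * 512 ))
    }
    for dev in nvme0n1 nvme0n2 nvme0n3 nvme1n1; do
        disk_is_free "$dev" || continue
        (( $(disk_size_bytes "$dev") >= min_disk_size )) && echo "usable: $dev"
    done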
00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:25.383 No valid GPT data, bailing 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:25.383 07:59:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:25.383 07:59:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:25.383 07:59:47 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:25.383 No valid GPT data, bailing 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:25.383 07:59:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:25.383 07:59:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:25.383 07:59:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:25.383 07:59:47 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:25.383 07:59:47 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:25.383 07:59:47 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:25.383 07:59:47 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:25.383 07:59:47 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:25.383 07:59:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.383 ************************************ 00:04:25.383 START TEST nvme_mount 00:04:25.383 ************************************ 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.383 07:59:47 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:26.757 Creating new GPT entries in memory. 00:04:26.757 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.757 other utilities. 00:04:26.757 07:59:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.757 07:59:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.757 07:59:48 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.757 07:59:48 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.757 07:59:48 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:27.692 Creating new GPT entries in memory. 00:04:27.692 The operation has completed successfully. 
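Stripped of the xtrace, the partitioning that just completed is a zap-and-recreate: wipe any existing GPT, then lay down partition 1 over a fixed sector range while holding a lock on the disk so concurrent setup scripts do not race, and wait for the kernel to publish the new node. A hedged sketch of that sequence, with the device name and sector range copied from the log; the udevadm settle step stands in for SPDK's sync_dev_uevents.sh and is an assumption, not the partition_drive helper:

    # Sketch: recreate the single test partition the way the entries above show.
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                           # destroy old GPT and protective MBR
    flock "$disk" sgdisk "$disk" --new=1:2048:264191   # partition 1 over a fixed sector range
    udevadm settle                                     # wait for /dev/nvme0n1p1 to appear
    [[ -b ${disk}p1 ]] && echo "partition ready: ${disk}p1"

The test then formats and mounts that partition, as the mkfs.ext4 and mount entries below record.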
00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56900 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.692 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.693 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.950 07:59:49 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.950 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.950 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.950 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:28.209 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.209 07:59:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.468 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:28.468 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:28.468 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:28.468 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.468 07:59:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.726 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.726 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:28.726 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:28.726 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.726 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.726 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.726 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.726 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.985 07:59:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.244 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.244 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:29.244 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:29.244 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.244 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.244 07:59:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.503 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.503 00:04:29.503 real 0m4.057s 00:04:29.503 user 0m0.700s 00:04:29.503 sys 0m1.101s 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:29.503 07:59:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:29.503 ************************************ 00:04:29.503 END TEST nvme_mount 00:04:29.503 
************************************ 00:04:29.503 07:59:51 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:29.503 07:59:51 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:29.503 07:59:51 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:29.503 07:59:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:29.503 ************************************ 00:04:29.503 START TEST dm_mount 00:04:29.503 ************************************ 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.503 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:29.504 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.504 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:29.504 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:29.504 07:59:51 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:30.881 Creating new GPT entries in memory. 00:04:30.881 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.881 other utilities. 00:04:30.881 07:59:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.881 07:59:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.881 07:59:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.881 07:59:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.881 07:59:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:31.828 Creating new GPT entries in memory. 00:04:31.828 The operation has completed successfully. 
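As a standalone illustration of the two-partition GPT layout the trace above builds (a sketch only, not the partition_drive helper itself; the scratch device path is an assumption, and partprobe/udevadm stand in for scripts/sync_dev_uevents.sh):

#!/usr/bin/env bash
# Sketch: recreate the two ~128 MiB GPT partitions used by the dm_mount test.
# DISK is an assumption; point it at a disposable test device.
set -euo pipefail
DISK=/dev/nvme0n1

sgdisk "$DISK" --zap-all               # clear any existing GPT/MBR metadata
sgdisk "$DISK" --new=1:2048:264191     # partition 1: sectors 2048-264191 (262144 sectors)
sgdisk "$DISK" --new=2:264192:526335   # partition 2: the next 262144 sectors
partprobe "$DISK"                      # have the kernel re-read the partition table
udevadm settle                         # wait for /dev/nvme0n1p1 and /dev/nvme0n1p2 nodes
lsblk "$DISK"
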
00:04:31.828 07:59:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.828 07:59:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.828 07:59:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:31.828 07:59:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.828 07:59:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:32.765 The operation has completed successfully. 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57333 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.765 07:59:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.024 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.025 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:33.025 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:33.025 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.025 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.025 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.025 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.025 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.284 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.284 07:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:33.284 07:59:55 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.284 07:59:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.543 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.543 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:33.543 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:33.543 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.543 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.543 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.543 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.543 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:33.802 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:04:33.802 00:04:33.802 real 0m4.241s 00:04:33.802 user 0m0.403s 00:04:33.802 sys 0m0.801s 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:33.802 07:59:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:33.802 ************************************ 00:04:33.802 END TEST dm_mount 00:04:33.802 ************************************ 00:04:33.802 07:59:55 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:33.802 07:59:55 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:33.802 07:59:55 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.802 07:59:55 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.802 07:59:55 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:33.802 07:59:55 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.802 07:59:55 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:34.061 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:34.061 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:34.061 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:34.061 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:34.061 07:59:55 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:34.061 07:59:55 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.061 07:59:55 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.061 07:59:55 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.061 07:59:55 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:34.061 07:59:55 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.061 07:59:55 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:34.061 00:04:34.061 real 0m9.816s 00:04:34.061 user 0m1.724s 00:04:34.061 sys 0m2.510s 00:04:34.061 07:59:55 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:34.061 07:59:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:34.061 ************************************ 00:04:34.061 END TEST devices 00:04:34.061 ************************************ 00:04:34.319 00:04:34.319 real 0m22.016s 00:04:34.319 user 0m6.967s 00:04:34.319 sys 0m9.483s 00:04:34.319 07:59:55 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:34.319 07:59:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.319 ************************************ 00:04:34.319 END TEST setup.sh 00:04:34.319 ************************************ 00:04:34.319 07:59:55 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:34.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.886 Hugepages 00:04:34.886 node hugesize free / total 00:04:34.886 node0 1048576kB 0 / 0 00:04:34.886 node0 2048kB 2048 / 2048 00:04:34.886 00:04:34.886 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:34.886 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:35.145 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:35.145 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:04:35.145 07:59:56 -- spdk/autotest.sh@130 -- # uname -s 00:04:35.145 07:59:56 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:35.145 07:59:56 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:35.145 07:59:56 -- common/autotest_common.sh@1530 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:35.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.970 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.970 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.970 07:59:57 -- common/autotest_common.sh@1531 -- # sleep 1 00:04:36.905 07:59:58 -- common/autotest_common.sh@1532 -- # bdfs=() 00:04:36.905 07:59:58 -- common/autotest_common.sh@1532 -- # local bdfs 00:04:36.905 07:59:58 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:04:36.905 07:59:58 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:04:36.906 07:59:58 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:36.906 07:59:58 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:36.906 07:59:58 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.906 07:59:58 -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:36.906 07:59:58 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:37.163 07:59:58 -- common/autotest_common.sh@1514 -- # (( 2 == 0 )) 00:04:37.163 07:59:58 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:37.163 07:59:58 -- common/autotest_common.sh@1535 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.422 Waiting for block devices as requested 00:04:37.422 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.680 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.680 07:59:59 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:04:37.680 07:59:59 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:37.680 07:59:59 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:37.680 07:59:59 -- common/autotest_common.sh@1501 -- # grep 0000:00:10.0/nvme/nvme 00:04:37.680 07:59:59 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:37.680 07:59:59 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:37.680 07:59:59 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:37.680 07:59:59 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme1 00:04:37.680 07:59:59 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme1 00:04:37.680 07:59:59 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme1 ]] 00:04:37.680 07:59:59 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme1 00:04:37.680 07:59:59 -- common/autotest_common.sh@1544 -- # grep oacs 00:04:37.680 07:59:59 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:04:37.680 07:59:59 -- common/autotest_common.sh@1544 -- # oacs=' 0x12a' 00:04:37.680 07:59:59 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:04:37.680 07:59:59 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:04:37.680 07:59:59 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme1 
00:04:37.680 07:59:59 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:04:37.680 07:59:59 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:04:37.680 07:59:59 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:04:37.680 07:59:59 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:04:37.680 07:59:59 -- common/autotest_common.sh@1556 -- # continue 00:04:37.680 07:59:59 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:04:37.680 07:59:59 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:37.680 07:59:59 -- common/autotest_common.sh@1501 -- # grep 0000:00:11.0/nvme/nvme 00:04:37.680 07:59:59 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:37.680 07:59:59 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:37.680 07:59:59 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:37.680 07:59:59 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:37.680 07:59:59 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:04:37.680 07:59:59 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:04:37.680 07:59:59 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:04:37.680 07:59:59 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:04:37.680 07:59:59 -- common/autotest_common.sh@1544 -- # grep oacs 00:04:37.680 07:59:59 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:04:37.680 07:59:59 -- common/autotest_common.sh@1544 -- # oacs=' 0x12a' 00:04:37.680 07:59:59 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:04:37.680 07:59:59 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:04:37.680 07:59:59 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:04:37.680 07:59:59 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:04:37.680 07:59:59 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:04:37.680 07:59:59 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:04:37.680 07:59:59 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:04:37.680 07:59:59 -- common/autotest_common.sh@1556 -- # continue 00:04:37.680 07:59:59 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:37.680 07:59:59 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:37.680 07:59:59 -- common/autotest_common.sh@10 -- # set +x 00:04:37.680 07:59:59 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:37.680 07:59:59 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:37.680 07:59:59 -- common/autotest_common.sh@10 -- # set +x 00:04:37.680 07:59:59 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.614 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.614 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.614 08:00:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:38.614 08:00:00 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:38.614 08:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 08:00:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:38.614 08:00:00 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:04:38.615 08:00:00 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:04:38.615 08:00:00 -- common/autotest_common.sh@1576 -- 
# bdfs=() 00:04:38.615 08:00:00 -- common/autotest_common.sh@1576 -- # local bdfs 00:04:38.615 08:00:00 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:04:38.615 08:00:00 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:38.615 08:00:00 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:38.615 08:00:00 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.615 08:00:00 -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:38.615 08:00:00 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:38.874 08:00:00 -- common/autotest_common.sh@1514 -- # (( 2 == 0 )) 00:04:38.874 08:00:00 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:38.874 08:00:00 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:38.874 08:00:00 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:38.874 08:00:00 -- common/autotest_common.sh@1579 -- # device=0x0010 00:04:38.874 08:00:00 -- common/autotest_common.sh@1580 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:38.874 08:00:00 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:38.874 08:00:00 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:38.874 08:00:00 -- common/autotest_common.sh@1579 -- # device=0x0010 00:04:38.874 08:00:00 -- common/autotest_common.sh@1580 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:38.874 08:00:00 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:04:38.874 08:00:00 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:04:38.874 08:00:00 -- common/autotest_common.sh@1592 -- # return 0 00:04:38.874 08:00:00 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:38.874 08:00:00 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:38.874 08:00:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:38.874 08:00:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:38.874 08:00:00 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:38.874 08:00:00 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:38.874 08:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:38.874 08:00:00 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:38.874 08:00:00 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:38.874 08:00:00 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:38.874 08:00:00 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:38.874 08:00:00 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:38.874 08:00:00 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:38.874 08:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:38.874 ************************************ 00:04:38.874 START TEST env 00:04:38.874 ************************************ 00:04:38.874 08:00:00 env -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:38.874 * Looking for test storage... 
00:04:38.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:38.874 08:00:00 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:38.874 08:00:00 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:38.874 08:00:00 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:38.874 08:00:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.874 ************************************ 00:04:38.874 START TEST env_memory 00:04:38.874 ************************************ 00:04:38.874 08:00:00 env.env_memory -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:38.874 00:04:38.874 00:04:38.874 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.874 http://cunit.sourceforge.net/ 00:04:38.874 00:04:38.874 00:04:38.874 Suite: memory 00:04:38.874 Test: alloc and free memory map ...[2024-06-10 08:00:00.670804] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:38.874 passed 00:04:38.874 Test: mem map translation ...[2024-06-10 08:00:00.695245] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:38.874 [2024-06-10 08:00:00.695307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:38.874 [2024-06-10 08:00:00.695352] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:38.874 [2024-06-10 08:00:00.695361] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:38.874 passed 00:04:39.131 Test: mem map registration ...[2024-06-10 08:00:00.746228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:39.131 [2024-06-10 08:00:00.746267] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:39.131 passed 00:04:39.131 Test: mem map adjacent registrations ...passed 00:04:39.131 00:04:39.131 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.131 suites 1 1 n/a 0 0 00:04:39.131 tests 4 4 4 0 0 00:04:39.131 asserts 152 152 152 0 n/a 00:04:39.131 00:04:39.131 Elapsed time = 0.168 seconds 00:04:39.131 00:04:39.131 real 0m0.184s 00:04:39.131 user 0m0.165s 00:04:39.131 sys 0m0.015s 00:04:39.131 08:00:00 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:39.131 08:00:00 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:39.131 ************************************ 00:04:39.131 END TEST env_memory 00:04:39.131 ************************************ 00:04:39.131 08:00:00 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:39.131 08:00:00 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:39.131 08:00:00 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:39.131 08:00:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.131 ************************************ 00:04:39.131 START TEST env_vtophys 00:04:39.131 ************************************ 00:04:39.132 08:00:00 
env.env_vtophys -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:39.132 EAL: lib.eal log level changed from notice to debug 00:04:39.132 EAL: Detected lcore 0 as core 0 on socket 0 00:04:39.132 EAL: Detected lcore 1 as core 0 on socket 0 00:04:39.132 EAL: Detected lcore 2 as core 0 on socket 0 00:04:39.132 EAL: Detected lcore 3 as core 0 on socket 0 00:04:39.132 EAL: Detected lcore 4 as core 0 on socket 0 00:04:39.132 EAL: Detected lcore 5 as core 0 on socket 0 00:04:39.132 EAL: Detected lcore 6 as core 0 on socket 0 00:04:39.132 EAL: Detected lcore 7 as core 0 on socket 0 00:04:39.132 EAL: Detected lcore 8 as core 0 on socket 0 00:04:39.132 EAL: Detected lcore 9 as core 0 on socket 0 00:04:39.132 EAL: Maximum logical cores by configuration: 128 00:04:39.132 EAL: Detected CPU lcores: 10 00:04:39.132 EAL: Detected NUMA nodes: 1 00:04:39.132 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:39.132 EAL: Detected shared linkage of DPDK 00:04:39.132 EAL: No shared files mode enabled, IPC will be disabled 00:04:39.132 EAL: Selected IOVA mode 'PA' 00:04:39.132 EAL: Probing VFIO support... 00:04:39.132 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:39.132 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:39.132 EAL: Ask a virtual area of 0x2e000 bytes 00:04:39.132 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:39.132 EAL: Setting up physically contiguous memory... 00:04:39.132 EAL: Setting maximum number of open files to 524288 00:04:39.132 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:39.132 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:39.132 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.132 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:39.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.132 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.132 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:39.132 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:39.132 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.132 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:39.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.132 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.132 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:39.132 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:39.132 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.132 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:39.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.132 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.132 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:39.132 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:39.132 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.132 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:39.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.132 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.132 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:39.132 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:39.132 EAL: Hugepages will be freed exactly as allocated. 
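Two host-side prerequisites show up in the EAL bring-up above and in the earlier setup.sh status output: 2 MiB hugepages reserved on node0, and the vfio module, whose absence here makes EAL skip VFIO and select IOVA mode 'PA'. A quick pre-flight check, using standard Linux sysfs/procfs paths rather than anything SPDK-specific, might look like:

#!/usr/bin/env bash
# Sketch: verify hugepage reservations and VFIO availability before an EAL run.
grep -H . /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages \
          /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
grep HugePages /proc/meminfo                     # kernel-wide hugepage summary

if [ -d /sys/module/vfio ]; then
    echo "vfio is loaded"
else
    echo "vfio not loaded; EAL will skip VFIO support, as in the log above"
fi
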
00:04:39.132 EAL: No shared files mode enabled, IPC is disabled 00:04:39.132 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: TSC frequency is ~2200000 KHz 00:04:39.389 EAL: Main lcore 0 is ready (tid=7faa4a261a00;cpuset=[0]) 00:04:39.389 EAL: Trying to obtain current memory policy. 00:04:39.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.389 EAL: Restoring previous memory policy: 0 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was expanded by 2MB 00:04:39.389 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:39.389 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:39.389 EAL: Mem event callback 'spdk:(nil)' registered 00:04:39.389 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:39.389 00:04:39.389 00:04:39.389 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.389 http://cunit.sourceforge.net/ 00:04:39.389 00:04:39.389 00:04:39.389 Suite: components_suite 00:04:39.389 Test: vtophys_malloc_test ...passed 00:04:39.389 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:39.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.389 EAL: Restoring previous memory policy: 4 00:04:39.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was expanded by 4MB 00:04:39.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was shrunk by 4MB 00:04:39.389 EAL: Trying to obtain current memory policy. 00:04:39.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.389 EAL: Restoring previous memory policy: 4 00:04:39.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was expanded by 6MB 00:04:39.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was shrunk by 6MB 00:04:39.389 EAL: Trying to obtain current memory policy. 00:04:39.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.389 EAL: Restoring previous memory policy: 4 00:04:39.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was expanded by 10MB 00:04:39.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was shrunk by 10MB 00:04:39.389 EAL: Trying to obtain current memory policy. 
00:04:39.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.389 EAL: Restoring previous memory policy: 4 00:04:39.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was expanded by 18MB 00:04:39.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was shrunk by 18MB 00:04:39.389 EAL: Trying to obtain current memory policy. 00:04:39.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.389 EAL: Restoring previous memory policy: 4 00:04:39.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was expanded by 34MB 00:04:39.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.389 EAL: request: mp_malloc_sync 00:04:39.389 EAL: No shared files mode enabled, IPC is disabled 00:04:39.389 EAL: Heap on socket 0 was shrunk by 34MB 00:04:39.389 EAL: Trying to obtain current memory policy. 00:04:39.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.389 EAL: Restoring previous memory policy: 4 00:04:39.390 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.390 EAL: request: mp_malloc_sync 00:04:39.390 EAL: No shared files mode enabled, IPC is disabled 00:04:39.390 EAL: Heap on socket 0 was expanded by 66MB 00:04:39.390 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.390 EAL: request: mp_malloc_sync 00:04:39.390 EAL: No shared files mode enabled, IPC is disabled 00:04:39.390 EAL: Heap on socket 0 was shrunk by 66MB 00:04:39.390 EAL: Trying to obtain current memory policy. 00:04:39.390 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.390 EAL: Restoring previous memory policy: 4 00:04:39.390 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.390 EAL: request: mp_malloc_sync 00:04:39.390 EAL: No shared files mode enabled, IPC is disabled 00:04:39.390 EAL: Heap on socket 0 was expanded by 130MB 00:04:39.390 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.390 EAL: request: mp_malloc_sync 00:04:39.390 EAL: No shared files mode enabled, IPC is disabled 00:04:39.390 EAL: Heap on socket 0 was shrunk by 130MB 00:04:39.390 EAL: Trying to obtain current memory policy. 00:04:39.390 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.646 EAL: Restoring previous memory policy: 4 00:04:39.646 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.646 EAL: request: mp_malloc_sync 00:04:39.646 EAL: No shared files mode enabled, IPC is disabled 00:04:39.646 EAL: Heap on socket 0 was expanded by 258MB 00:04:39.646 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.646 EAL: request: mp_malloc_sync 00:04:39.646 EAL: No shared files mode enabled, IPC is disabled 00:04:39.646 EAL: Heap on socket 0 was shrunk by 258MB 00:04:39.646 EAL: Trying to obtain current memory policy. 
00:04:39.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.646 EAL: Restoring previous memory policy: 4 00:04:39.646 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.646 EAL: request: mp_malloc_sync 00:04:39.646 EAL: No shared files mode enabled, IPC is disabled 00:04:39.646 EAL: Heap on socket 0 was expanded by 514MB 00:04:39.904 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.904 EAL: request: mp_malloc_sync 00:04:39.904 EAL: No shared files mode enabled, IPC is disabled 00:04:39.904 EAL: Heap on socket 0 was shrunk by 514MB 00:04:39.904 EAL: Trying to obtain current memory policy. 00:04:39.904 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.471 EAL: Restoring previous memory policy: 4 00:04:40.471 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.471 EAL: request: mp_malloc_sync 00:04:40.471 EAL: No shared files mode enabled, IPC is disabled 00:04:40.471 EAL: Heap on socket 0 was expanded by 1026MB 00:04:40.730 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.989 passed 00:04:40.989 00:04:40.989 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.989 suites 1 1 n/a 0 0 00:04:40.989 tests 2 2 2 0 0 00:04:40.989 asserts 5302 5302 5302 0 n/a 00:04:40.989 00:04:40.989 Elapsed time = 1.587 seconds 00:04:40.989 EAL: request: mp_malloc_sync 00:04:40.989 EAL: No shared files mode enabled, IPC is disabled 00:04:40.989 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:40.989 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.989 EAL: request: mp_malloc_sync 00:04:40.989 EAL: No shared files mode enabled, IPC is disabled 00:04:40.989 EAL: Heap on socket 0 was shrunk by 2MB 00:04:40.989 EAL: No shared files mode enabled, IPC is disabled 00:04:40.989 EAL: No shared files mode enabled, IPC is disabled 00:04:40.989 EAL: No shared files mode enabled, IPC is disabled 00:04:40.989 00:04:40.989 real 0m1.795s 00:04:40.989 user 0m1.019s 00:04:40.989 sys 0m0.638s 00:04:40.989 08:00:02 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:40.989 08:00:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:40.989 ************************************ 00:04:40.989 END TEST env_vtophys 00:04:40.989 ************************************ 00:04:40.989 08:00:02 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:40.989 08:00:02 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:40.989 08:00:02 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:40.989 08:00:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.989 ************************************ 00:04:40.989 START TEST env_pci 00:04:40.989 ************************************ 00:04:40.989 08:00:02 env.env_pci -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:40.989 00:04:40.989 00:04:40.989 CUnit - A unit testing framework for C - Version 2.1-3 00:04:40.989 http://cunit.sourceforge.net/ 00:04:40.989 00:04:40.989 00:04:40.989 Suite: pci 00:04:40.989 Test: pci_hook ...[2024-06-10 08:00:02.741137] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58526 has claimed it 00:04:40.989 passed 00:04:40.989 00:04:40.989 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.989 suites 1 1 n/a 0 0 00:04:40.989 tests 1 1 1 0 0 00:04:40.989 asserts 25 25 25 0 n/a 00:04:40.989 00:04:40.989 Elapsed time = 0.002 seconds 00:04:40.989 EAL: Cannot find 
device (10000:00:01.0) 00:04:40.989 EAL: Failed to attach device on primary process 00:04:40.989 00:04:40.989 real 0m0.020s 00:04:40.989 user 0m0.009s 00:04:40.989 sys 0m0.011s 00:04:40.989 08:00:02 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:40.989 08:00:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:40.989 ************************************ 00:04:40.989 END TEST env_pci 00:04:40.989 ************************************ 00:04:40.989 08:00:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:40.989 08:00:02 env -- env/env.sh@15 -- # uname 00:04:40.989 08:00:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:40.989 08:00:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:40.990 08:00:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:40.990 08:00:02 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:04:40.990 08:00:02 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:40.990 08:00:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.990 ************************************ 00:04:40.990 START TEST env_dpdk_post_init 00:04:40.990 ************************************ 00:04:40.990 08:00:02 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:40.990 EAL: Detected CPU lcores: 10 00:04:40.990 EAL: Detected NUMA nodes: 1 00:04:40.990 EAL: Detected shared linkage of DPDK 00:04:40.990 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:40.990 EAL: Selected IOVA mode 'PA' 00:04:41.247 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.248 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:41.248 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:41.248 Starting DPDK initialization... 00:04:41.248 Starting SPDK post initialization... 00:04:41.248 SPDK NVMe probe 00:04:41.248 Attaching to 0000:00:10.0 00:04:41.248 Attaching to 0000:00:11.0 00:04:41.248 Attached to 0000:00:10.0 00:04:41.248 Attached to 0000:00:11.0 00:04:41.248 Cleaning up... 
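The probe and attach sequence above only succeeds because setup.sh has already rebound both controllers from the kernel nvme driver to uio_pci_generic (the 'nvme -> uio_pci_generic' lines earlier in the log). A rough sketch of what that rebinding amounts to at the sysfs level, with the BDF as an assumption (scripts/setup.sh performs the equivalent steps, plus hugepage setup, automatically):

#!/usr/bin/env bash
# Sketch: hand one NVMe controller from the kernel nvme driver to uio_pci_generic.
# BDF is an assumption; in this log, setup.sh rebinds 0000:00:10.0 and 0000:00:11.0.
set -euo pipefail
BDF=0000:00:10.0

modprobe uio_pci_generic
echo uio_pci_generic > "/sys/bus/pci/devices/$BDF/driver_override"
echo "$BDF"          > "/sys/bus/pci/devices/$BDF/driver/unbind"   # detach from nvme
echo "$BDF"          > /sys/bus/pci/drivers_probe                  # rebind per override
basename "$(readlink "/sys/bus/pci/devices/$BDF/driver")"          # expect: uio_pci_generic
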
00:04:41.248 00:04:41.248 real 0m0.170s 00:04:41.248 user 0m0.031s 00:04:41.248 sys 0m0.040s 00:04:41.248 08:00:02 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:41.248 ************************************ 00:04:41.248 END TEST env_dpdk_post_init 00:04:41.248 ************************************ 00:04:41.248 08:00:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.248 08:00:03 env -- env/env.sh@26 -- # uname 00:04:41.248 08:00:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.248 08:00:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.248 08:00:03 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:41.248 08:00:03 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:41.248 08:00:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.248 ************************************ 00:04:41.248 START TEST env_mem_callbacks 00:04:41.248 ************************************ 00:04:41.248 08:00:03 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.248 EAL: Detected CPU lcores: 10 00:04:41.248 EAL: Detected NUMA nodes: 1 00:04:41.248 EAL: Detected shared linkage of DPDK 00:04:41.248 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.248 EAL: Selected IOVA mode 'PA' 00:04:41.506 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.506 00:04:41.506 00:04:41.506 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.506 http://cunit.sourceforge.net/ 00:04:41.506 00:04:41.506 00:04:41.506 Suite: memory 00:04:41.506 Test: test ... 00:04:41.506 register 0x200000200000 2097152 00:04:41.506 malloc 3145728 00:04:41.506 register 0x200000400000 4194304 00:04:41.506 buf 0x200000500000 len 3145728 PASSED 00:04:41.506 malloc 64 00:04:41.506 buf 0x2000004fff40 len 64 PASSED 00:04:41.506 malloc 4194304 00:04:41.506 register 0x200000800000 6291456 00:04:41.506 buf 0x200000a00000 len 4194304 PASSED 00:04:41.506 free 0x200000500000 3145728 00:04:41.506 free 0x2000004fff40 64 00:04:41.506 unregister 0x200000400000 4194304 PASSED 00:04:41.506 free 0x200000a00000 4194304 00:04:41.506 unregister 0x200000800000 6291456 PASSED 00:04:41.506 malloc 8388608 00:04:41.506 register 0x200000400000 10485760 00:04:41.506 buf 0x200000600000 len 8388608 PASSED 00:04:41.506 free 0x200000600000 8388608 00:04:41.506 unregister 0x200000400000 10485760 PASSED 00:04:41.506 passed 00:04:41.506 00:04:41.506 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.506 suites 1 1 n/a 0 0 00:04:41.506 tests 1 1 1 0 0 00:04:41.506 asserts 15 15 15 0 n/a 00:04:41.506 00:04:41.506 Elapsed time = 0.010 seconds 00:04:41.506 00:04:41.506 real 0m0.147s 00:04:41.506 user 0m0.020s 00:04:41.506 sys 0m0.026s 00:04:41.506 08:00:03 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:41.506 08:00:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:41.506 ************************************ 00:04:41.506 END TEST env_mem_callbacks 00:04:41.506 ************************************ 00:04:41.506 00:04:41.506 real 0m2.692s 00:04:41.506 user 0m1.369s 00:04:41.506 sys 0m0.959s 00:04:41.506 08:00:03 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:41.506 ************************************ 00:04:41.506 08:00:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.506 END TEST env 00:04:41.506 
************************************ 00:04:41.506 08:00:03 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:41.506 08:00:03 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:41.506 08:00:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:41.506 08:00:03 -- common/autotest_common.sh@10 -- # set +x 00:04:41.506 ************************************ 00:04:41.506 START TEST rpc 00:04:41.506 ************************************ 00:04:41.506 08:00:03 rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:41.506 * Looking for test storage... 00:04:41.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:41.506 08:00:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58635 00:04:41.506 08:00:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:41.506 08:00:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.506 08:00:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58635 00:04:41.506 08:00:03 rpc -- common/autotest_common.sh@830 -- # '[' -z 58635 ']' 00:04:41.506 08:00:03 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.506 08:00:03 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:41.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.506 08:00:03 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.506 08:00:03 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:41.506 08:00:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.767 [2024-06-10 08:00:03.429153] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:04:41.767 [2024-06-10 08:00:03.429279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58635 ] 00:04:41.767 [2024-06-10 08:00:03.562900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.025 [2024-06-10 08:00:03.680230] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.025 [2024-06-10 08:00:03.680285] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58635' to capture a snapshot of events at runtime. 00:04:42.025 [2024-06-10 08:00:03.680295] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.025 [2024-06-10 08:00:03.680303] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.025 [2024-06-10 08:00:03.680310] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58635 for offline analysis/debug. 
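The rpc suite runs against a bare spdk_tgt started with the bdev tracepoint group enabled (-e bdev), and the target's own start-up notices above spell out how to snapshot that trace. A hand-run sketch, assuming the same build path and this run's pid (58635); both will differ on another machine, and rpc.py is the JSON-RPC client shipped in the repo's scripts/ directory, invoked here from the repo root:

  # Start the target with the 'bdev' tracepoint group, as rpc.sh@64 does above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  # Once /var/tmp/spdk.sock is listening, snapshot the trace shm exactly as the
  # NOTICE above suggests (58635 was this run's target pid):
  spdk_trace -s spdk_tgt -p 58635
  # The rpc_trace_cmd_test further down checks the same state over JSON-RPC;
  # with -e bdev the reported tpoint_group_mask is 0x8:
  scripts/rpc.py trace_get_info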
00:04:42.025 [2024-06-10 08:00:03.680333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.025 [2024-06-10 08:00:03.747447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:42.598 08:00:04 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:42.598 08:00:04 rpc -- common/autotest_common.sh@863 -- # return 0 00:04:42.598 08:00:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.598 08:00:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.598 08:00:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:42.598 08:00:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:42.598 08:00:04 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:42.598 08:00:04 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:42.598 08:00:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.598 ************************************ 00:04:42.598 START TEST rpc_integrity 00:04:42.598 ************************************ 00:04:42.598 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:42.598 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:42.598 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.598 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.598 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.598 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:42.598 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:42.856 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:42.856 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.856 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:42.856 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.856 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:42.856 { 00:04:42.856 "name": "Malloc0", 00:04:42.856 "aliases": [ 00:04:42.856 "67cde3c4-5505-4477-a8fd-09e6ce90c3c6" 00:04:42.856 ], 00:04:42.856 "product_name": "Malloc disk", 00:04:42.856 "block_size": 512, 00:04:42.856 "num_blocks": 16384, 00:04:42.856 "uuid": "67cde3c4-5505-4477-a8fd-09e6ce90c3c6", 00:04:42.856 "assigned_rate_limits": { 00:04:42.856 "rw_ios_per_sec": 0, 00:04:42.856 "rw_mbytes_per_sec": 0, 00:04:42.856 "r_mbytes_per_sec": 0, 00:04:42.856 "w_mbytes_per_sec": 0 00:04:42.856 }, 00:04:42.856 "claimed": false, 00:04:42.856 "zoned": false, 00:04:42.856 
"supported_io_types": { 00:04:42.856 "read": true, 00:04:42.856 "write": true, 00:04:42.856 "unmap": true, 00:04:42.856 "write_zeroes": true, 00:04:42.856 "flush": true, 00:04:42.856 "reset": true, 00:04:42.856 "compare": false, 00:04:42.856 "compare_and_write": false, 00:04:42.856 "abort": true, 00:04:42.856 "nvme_admin": false, 00:04:42.856 "nvme_io": false 00:04:42.856 }, 00:04:42.856 "memory_domains": [ 00:04:42.856 { 00:04:42.856 "dma_device_id": "system", 00:04:42.856 "dma_device_type": 1 00:04:42.856 }, 00:04:42.856 { 00:04:42.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.856 "dma_device_type": 2 00:04:42.856 } 00:04:42.856 ], 00:04:42.856 "driver_specific": {} 00:04:42.856 } 00:04:42.856 ]' 00:04:42.856 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:42.856 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:42.856 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.856 [2024-06-10 08:00:04.604493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:42.856 [2024-06-10 08:00:04.604542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:42.856 [2024-06-10 08:00:04.604559] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9843d0 00:04:42.856 [2024-06-10 08:00:04.604583] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:42.856 [2024-06-10 08:00:04.606336] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:42.856 [2024-06-10 08:00:04.606371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:42.856 Passthru0 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.856 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.856 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.856 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:42.856 { 00:04:42.856 "name": "Malloc0", 00:04:42.856 "aliases": [ 00:04:42.856 "67cde3c4-5505-4477-a8fd-09e6ce90c3c6" 00:04:42.856 ], 00:04:42.856 "product_name": "Malloc disk", 00:04:42.856 "block_size": 512, 00:04:42.856 "num_blocks": 16384, 00:04:42.856 "uuid": "67cde3c4-5505-4477-a8fd-09e6ce90c3c6", 00:04:42.856 "assigned_rate_limits": { 00:04:42.856 "rw_ios_per_sec": 0, 00:04:42.856 "rw_mbytes_per_sec": 0, 00:04:42.856 "r_mbytes_per_sec": 0, 00:04:42.856 "w_mbytes_per_sec": 0 00:04:42.856 }, 00:04:42.856 "claimed": true, 00:04:42.856 "claim_type": "exclusive_write", 00:04:42.856 "zoned": false, 00:04:42.856 "supported_io_types": { 00:04:42.856 "read": true, 00:04:42.856 "write": true, 00:04:42.856 "unmap": true, 00:04:42.856 "write_zeroes": true, 00:04:42.856 "flush": true, 00:04:42.856 "reset": true, 00:04:42.856 "compare": false, 00:04:42.856 "compare_and_write": false, 00:04:42.856 "abort": true, 00:04:42.856 "nvme_admin": false, 00:04:42.856 "nvme_io": false 00:04:42.856 }, 00:04:42.856 "memory_domains": [ 00:04:42.856 { 00:04:42.856 "dma_device_id": "system", 00:04:42.856 "dma_device_type": 1 
00:04:42.856 }, 00:04:42.856 { 00:04:42.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.856 "dma_device_type": 2 00:04:42.856 } 00:04:42.856 ], 00:04:42.856 "driver_specific": {} 00:04:42.856 }, 00:04:42.856 { 00:04:42.856 "name": "Passthru0", 00:04:42.856 "aliases": [ 00:04:42.856 "e4858f34-44d5-5689-a92f-ad332b7908d0" 00:04:42.856 ], 00:04:42.856 "product_name": "passthru", 00:04:42.856 "block_size": 512, 00:04:42.856 "num_blocks": 16384, 00:04:42.856 "uuid": "e4858f34-44d5-5689-a92f-ad332b7908d0", 00:04:42.856 "assigned_rate_limits": { 00:04:42.856 "rw_ios_per_sec": 0, 00:04:42.856 "rw_mbytes_per_sec": 0, 00:04:42.856 "r_mbytes_per_sec": 0, 00:04:42.856 "w_mbytes_per_sec": 0 00:04:42.856 }, 00:04:42.856 "claimed": false, 00:04:42.856 "zoned": false, 00:04:42.856 "supported_io_types": { 00:04:42.856 "read": true, 00:04:42.856 "write": true, 00:04:42.856 "unmap": true, 00:04:42.856 "write_zeroes": true, 00:04:42.856 "flush": true, 00:04:42.856 "reset": true, 00:04:42.856 "compare": false, 00:04:42.856 "compare_and_write": false, 00:04:42.856 "abort": true, 00:04:42.856 "nvme_admin": false, 00:04:42.856 "nvme_io": false 00:04:42.856 }, 00:04:42.856 "memory_domains": [ 00:04:42.856 { 00:04:42.856 "dma_device_id": "system", 00:04:42.856 "dma_device_type": 1 00:04:42.856 }, 00:04:42.856 { 00:04:42.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.856 "dma_device_type": 2 00:04:42.856 } 00:04:42.856 ], 00:04:42.856 "driver_specific": { 00:04:42.856 "passthru": { 00:04:42.856 "name": "Passthru0", 00:04:42.856 "base_bdev_name": "Malloc0" 00:04:42.856 } 00:04:42.856 } 00:04:42.856 } 00:04:42.856 ]' 00:04:42.857 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:42.857 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:42.857 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:42.857 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.857 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.857 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.857 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:42.857 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.857 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.857 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.857 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:42.857 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:42.857 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.857 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:42.857 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:42.857 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.116 08:00:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.116 00:04:43.116 real 0m0.316s 00:04:43.116 user 0m0.211s 00:04:43.116 sys 0m0.039s 00:04:43.116 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:43.116 08:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.116 ************************************ 00:04:43.116 END TEST rpc_integrity 00:04:43.116 ************************************ 00:04:43.116 08:00:04 rpc -- rpc/rpc.sh@74 -- # run_test 
rpc_plugins rpc_plugins 00:04:43.116 08:00:04 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:43.116 08:00:04 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:43.116 08:00:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.116 ************************************ 00:04:43.116 START TEST rpc_plugins 00:04:43.116 ************************************ 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.116 { 00:04:43.116 "name": "Malloc1", 00:04:43.116 "aliases": [ 00:04:43.116 "ef059ce0-b0fc-4316-b039-218b9e2cbf6e" 00:04:43.116 ], 00:04:43.116 "product_name": "Malloc disk", 00:04:43.116 "block_size": 4096, 00:04:43.116 "num_blocks": 256, 00:04:43.116 "uuid": "ef059ce0-b0fc-4316-b039-218b9e2cbf6e", 00:04:43.116 "assigned_rate_limits": { 00:04:43.116 "rw_ios_per_sec": 0, 00:04:43.116 "rw_mbytes_per_sec": 0, 00:04:43.116 "r_mbytes_per_sec": 0, 00:04:43.116 "w_mbytes_per_sec": 0 00:04:43.116 }, 00:04:43.116 "claimed": false, 00:04:43.116 "zoned": false, 00:04:43.116 "supported_io_types": { 00:04:43.116 "read": true, 00:04:43.116 "write": true, 00:04:43.116 "unmap": true, 00:04:43.116 "write_zeroes": true, 00:04:43.116 "flush": true, 00:04:43.116 "reset": true, 00:04:43.116 "compare": false, 00:04:43.116 "compare_and_write": false, 00:04:43.116 "abort": true, 00:04:43.116 "nvme_admin": false, 00:04:43.116 "nvme_io": false 00:04:43.116 }, 00:04:43.116 "memory_domains": [ 00:04:43.116 { 00:04:43.116 "dma_device_id": "system", 00:04:43.116 "dma_device_type": 1 00:04:43.116 }, 00:04:43.116 { 00:04:43.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.116 "dma_device_type": 2 00:04:43.116 } 00:04:43.116 ], 00:04:43.116 "driver_specific": {} 00:04:43.116 } 00:04:43.116 ]' 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # 
bdevs='[]' 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:43.116 08:00:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.116 00:04:43.116 real 0m0.162s 00:04:43.116 user 0m0.102s 00:04:43.116 sys 0m0.024s 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:43.116 ************************************ 00:04:43.116 END TEST rpc_plugins 00:04:43.116 08:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.116 ************************************ 00:04:43.375 08:00:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.375 08:00:05 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:43.375 08:00:05 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:43.375 08:00:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.375 ************************************ 00:04:43.375 START TEST rpc_trace_cmd_test 00:04:43.375 ************************************ 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:43.375 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58635", 00:04:43.375 "tpoint_group_mask": "0x8", 00:04:43.375 "iscsi_conn": { 00:04:43.375 "mask": "0x2", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "scsi": { 00:04:43.375 "mask": "0x4", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "bdev": { 00:04:43.375 "mask": "0x8", 00:04:43.375 "tpoint_mask": "0xffffffffffffffff" 00:04:43.375 }, 00:04:43.375 "nvmf_rdma": { 00:04:43.375 "mask": "0x10", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "nvmf_tcp": { 00:04:43.375 "mask": "0x20", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "ftl": { 00:04:43.375 "mask": "0x40", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "blobfs": { 00:04:43.375 "mask": "0x80", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "dsa": { 00:04:43.375 "mask": "0x200", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "thread": { 00:04:43.375 "mask": "0x400", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "nvme_pcie": { 00:04:43.375 "mask": "0x800", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "iaa": { 00:04:43.375 "mask": "0x1000", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "nvme_tcp": { 00:04:43.375 "mask": "0x2000", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "bdev_nvme": { 00:04:43.375 "mask": "0x4000", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 }, 00:04:43.375 "sock": { 00:04:43.375 "mask": "0x8000", 00:04:43.375 "tpoint_mask": "0x0" 00:04:43.375 } 00:04:43.375 }' 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:43.375 08:00:05 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:43.375 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:43.633 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:43.633 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:43.633 08:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:43.633 00:04:43.633 real 0m0.284s 00:04:43.633 user 0m0.246s 00:04:43.633 sys 0m0.028s 00:04:43.633 08:00:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:43.633 ************************************ 00:04:43.633 END TEST rpc_trace_cmd_test 00:04:43.633 08:00:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.633 ************************************ 00:04:43.633 08:00:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:43.633 08:00:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:43.633 08:00:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:43.633 08:00:05 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:43.633 08:00:05 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:43.633 08:00:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.633 ************************************ 00:04:43.633 START TEST rpc_daemon_integrity 00:04:43.633 ************************************ 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.633 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.634 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:43.634 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.634 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.634 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.634 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.634 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.634 { 00:04:43.634 "name": "Malloc2", 00:04:43.634 "aliases": [ 00:04:43.634 "2cc4610e-8197-46f7-be64-2968b21e9632" 00:04:43.634 ], 00:04:43.634 "product_name": "Malloc disk", 00:04:43.634 "block_size": 512, 00:04:43.634 "num_blocks": 16384, 00:04:43.634 "uuid": 
"2cc4610e-8197-46f7-be64-2968b21e9632", 00:04:43.634 "assigned_rate_limits": { 00:04:43.634 "rw_ios_per_sec": 0, 00:04:43.634 "rw_mbytes_per_sec": 0, 00:04:43.634 "r_mbytes_per_sec": 0, 00:04:43.634 "w_mbytes_per_sec": 0 00:04:43.634 }, 00:04:43.634 "claimed": false, 00:04:43.634 "zoned": false, 00:04:43.634 "supported_io_types": { 00:04:43.634 "read": true, 00:04:43.634 "write": true, 00:04:43.634 "unmap": true, 00:04:43.634 "write_zeroes": true, 00:04:43.634 "flush": true, 00:04:43.634 "reset": true, 00:04:43.634 "compare": false, 00:04:43.634 "compare_and_write": false, 00:04:43.634 "abort": true, 00:04:43.634 "nvme_admin": false, 00:04:43.634 "nvme_io": false 00:04:43.634 }, 00:04:43.634 "memory_domains": [ 00:04:43.634 { 00:04:43.634 "dma_device_id": "system", 00:04:43.634 "dma_device_type": 1 00:04:43.634 }, 00:04:43.634 { 00:04:43.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.634 "dma_device_type": 2 00:04:43.634 } 00:04:43.634 ], 00:04:43.634 "driver_specific": {} 00:04:43.634 } 00:04:43.634 ]' 00:04:43.634 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.890 [2024-06-10 08:00:05.534813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:43.890 [2024-06-10 08:00:05.534875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.890 [2024-06-10 08:00:05.534893] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x983af0 00:04:43.890 [2024-06-10 08:00:05.534902] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.890 [2024-06-10 08:00:05.536456] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.890 [2024-06-10 08:00:05.536503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.890 Passthru0 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.890 { 00:04:43.890 "name": "Malloc2", 00:04:43.890 "aliases": [ 00:04:43.890 "2cc4610e-8197-46f7-be64-2968b21e9632" 00:04:43.890 ], 00:04:43.890 "product_name": "Malloc disk", 00:04:43.890 "block_size": 512, 00:04:43.890 "num_blocks": 16384, 00:04:43.890 "uuid": "2cc4610e-8197-46f7-be64-2968b21e9632", 00:04:43.890 "assigned_rate_limits": { 00:04:43.890 "rw_ios_per_sec": 0, 00:04:43.890 "rw_mbytes_per_sec": 0, 00:04:43.890 "r_mbytes_per_sec": 0, 00:04:43.890 "w_mbytes_per_sec": 0 00:04:43.890 }, 00:04:43.890 "claimed": true, 00:04:43.890 "claim_type": "exclusive_write", 00:04:43.890 "zoned": false, 00:04:43.890 "supported_io_types": { 00:04:43.890 "read": true, 00:04:43.890 "write": true, 00:04:43.890 "unmap": true, 00:04:43.890 
"write_zeroes": true, 00:04:43.890 "flush": true, 00:04:43.890 "reset": true, 00:04:43.890 "compare": false, 00:04:43.890 "compare_and_write": false, 00:04:43.890 "abort": true, 00:04:43.890 "nvme_admin": false, 00:04:43.890 "nvme_io": false 00:04:43.890 }, 00:04:43.890 "memory_domains": [ 00:04:43.890 { 00:04:43.890 "dma_device_id": "system", 00:04:43.890 "dma_device_type": 1 00:04:43.890 }, 00:04:43.890 { 00:04:43.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.890 "dma_device_type": 2 00:04:43.890 } 00:04:43.890 ], 00:04:43.890 "driver_specific": {} 00:04:43.890 }, 00:04:43.890 { 00:04:43.890 "name": "Passthru0", 00:04:43.890 "aliases": [ 00:04:43.890 "b1b89f8b-b424-5ff9-a396-bcec05f07547" 00:04:43.890 ], 00:04:43.890 "product_name": "passthru", 00:04:43.890 "block_size": 512, 00:04:43.890 "num_blocks": 16384, 00:04:43.890 "uuid": "b1b89f8b-b424-5ff9-a396-bcec05f07547", 00:04:43.890 "assigned_rate_limits": { 00:04:43.890 "rw_ios_per_sec": 0, 00:04:43.890 "rw_mbytes_per_sec": 0, 00:04:43.890 "r_mbytes_per_sec": 0, 00:04:43.890 "w_mbytes_per_sec": 0 00:04:43.890 }, 00:04:43.890 "claimed": false, 00:04:43.890 "zoned": false, 00:04:43.890 "supported_io_types": { 00:04:43.890 "read": true, 00:04:43.890 "write": true, 00:04:43.890 "unmap": true, 00:04:43.890 "write_zeroes": true, 00:04:43.890 "flush": true, 00:04:43.890 "reset": true, 00:04:43.890 "compare": false, 00:04:43.890 "compare_and_write": false, 00:04:43.890 "abort": true, 00:04:43.890 "nvme_admin": false, 00:04:43.890 "nvme_io": false 00:04:43.890 }, 00:04:43.890 "memory_domains": [ 00:04:43.890 { 00:04:43.890 "dma_device_id": "system", 00:04:43.890 "dma_device_type": 1 00:04:43.890 }, 00:04:43.890 { 00:04:43.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.890 "dma_device_type": 2 00:04:43.890 } 00:04:43.890 ], 00:04:43.890 "driver_specific": { 00:04:43.890 "passthru": { 00:04:43.890 "name": "Passthru0", 00:04:43.890 "base_bdev_name": "Malloc2" 00:04:43.890 } 00:04:43.890 } 00:04:43.890 } 00:04:43.890 ]' 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.890 00:04:43.890 real 0m0.344s 00:04:43.890 user 0m0.233s 00:04:43.890 sys 0m0.047s 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:43.890 ************************************ 00:04:43.890 08:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.890 END TEST rpc_daemon_integrity 00:04:43.890 ************************************ 00:04:44.158 08:00:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.158 08:00:05 rpc -- rpc/rpc.sh@84 -- # killprocess 58635 00:04:44.158 08:00:05 rpc -- common/autotest_common.sh@949 -- # '[' -z 58635 ']' 00:04:44.158 08:00:05 rpc -- common/autotest_common.sh@953 -- # kill -0 58635 00:04:44.158 08:00:05 rpc -- common/autotest_common.sh@954 -- # uname 00:04:44.158 08:00:05 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:44.158 08:00:05 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 58635 00:04:44.158 08:00:05 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:44.158 08:00:05 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:44.158 killing process with pid 58635 00:04:44.158 08:00:05 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 58635' 00:04:44.158 08:00:05 rpc -- common/autotest_common.sh@968 -- # kill 58635 00:04:44.158 08:00:05 rpc -- common/autotest_common.sh@973 -- # wait 58635 00:04:44.724 ************************************ 00:04:44.724 END TEST rpc 00:04:44.724 ************************************ 00:04:44.724 00:04:44.724 real 0m3.025s 00:04:44.724 user 0m3.840s 00:04:44.724 sys 0m0.776s 00:04:44.724 08:00:06 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:44.724 08:00:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.724 08:00:06 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:44.724 08:00:06 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:44.724 08:00:06 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:44.724 08:00:06 -- common/autotest_common.sh@10 -- # set +x 00:04:44.724 ************************************ 00:04:44.724 START TEST skip_rpc 00:04:44.724 ************************************ 00:04:44.724 08:00:06 skip_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:44.724 * Looking for test storage... 
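Both integrity tests above (rpc_integrity and rpc_daemon_integrity) drive the same bdev lifecycle over JSON-RPC: create a malloc bdev, layer a passthru bdev on it, confirm both show up in bdev_get_bdevs, then tear them down in reverse order. A hand-run sketch of that lifecycle using the repo's rpc.py client; the default /var/tmp/spdk.sock is assumed, and Malloc0/Passthru0 are simply the names the target reported in this run:

  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py bdev_malloc_create 8 512              # 8 MiB, 512 B blocks -> 16384 blocks (Malloc0 here)
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length            # expect 2 while both bdevs exist
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length            # back to 0, as the '[' 0 == 0 ']' checks above verify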
00:04:44.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.724 08:00:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:44.724 08:00:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:44.724 08:00:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:44.724 08:00:06 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:44.724 08:00:06 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:44.724 08:00:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.724 ************************************ 00:04:44.724 START TEST skip_rpc 00:04:44.724 ************************************ 00:04:44.724 08:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:04:44.724 08:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58833 00:04:44.724 08:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.724 08:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:44.724 08:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:44.724 [2024-06-10 08:00:06.503287] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:04:44.724 [2024-06-10 08:00:06.503428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58833 ] 00:04:44.983 [2024-06-10 08:00:06.645483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.983 [2024-06-10 08:00:06.768831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.983 [2024-06-10 08:00:06.832680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - 
SIGINT SIGTERM EXIT 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58833 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 58833 ']' 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 58833 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 58833 00:04:50.305 killing process with pid 58833 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 58833' 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 58833 00:04:50.305 08:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 58833 00:04:50.305 00:04:50.305 real 0m5.587s 00:04:50.305 user 0m5.135s 00:04:50.305 sys 0m0.352s 00:04:50.305 08:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:50.305 08:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.305 ************************************ 00:04:50.305 END TEST skip_rpc 00:04:50.305 ************************************ 00:04:50.305 08:00:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.305 08:00:12 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:50.305 08:00:12 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:50.305 08:00:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.305 ************************************ 00:04:50.305 START TEST skip_rpc_with_json 00:04:50.305 ************************************ 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58920 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58920 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 58920 ']' 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:50.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
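The skip_rpc case that finished above (pid 58833) only has to prove a negative: with --no-rpc-server the target comes up, but any JSON-RPC call must fail. A sketch of the same check done by hand; the rpc.py client and the default socket path are assumptions about the local setup, and the fixed sleep mirrors what skip_rpc.sh does instead of polling the socket:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                                   # no socket to wait on, so the script just sleeps
  scripts/rpc.py spdk_get_version && echo "unexpected: RPC answered" || echo "expected failure"
  kill %1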
00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:50.305 08:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.305 [2024-06-10 08:00:12.162077] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:04:50.305 [2024-06-10 08:00:12.162239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58920 ] 00:04:50.563 [2024-06-10 08:00:12.304459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.822 [2024-06-10 08:00:12.437757] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.822 [2024-06-10 08:00:12.512025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.391 [2024-06-10 08:00:13.125353] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.391 request: 00:04:51.391 { 00:04:51.391 "trtype": "tcp", 00:04:51.391 "method": "nvmf_get_transports", 00:04:51.391 "req_id": 1 00:04:51.391 } 00:04:51.391 Got JSON-RPC error response 00:04:51.391 response: 00:04:51.391 { 00:04:51.391 "code": -19, 00:04:51.391 "message": "No such device" 00:04:51.391 } 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.391 [2024-06-10 08:00:13.137431] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.391 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.650 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.650 08:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.650 { 00:04:51.650 "subsystems": [ 00:04:51.650 { 00:04:51.650 "subsystem": "keyring", 00:04:51.650 "config": [] 00:04:51.650 }, 00:04:51.650 { 00:04:51.650 "subsystem": "iobuf", 00:04:51.650 "config": [ 00:04:51.650 { 00:04:51.650 "method": "iobuf_set_options", 00:04:51.650 "params": { 00:04:51.650 "small_pool_count": 8192, 00:04:51.650 "large_pool_count": 1024, 00:04:51.650 "small_bufsize": 8192, 00:04:51.650 "large_bufsize": 135168 00:04:51.650 } 00:04:51.650 } 00:04:51.650 ] 00:04:51.650 }, 
00:04:51.650 { 00:04:51.650 "subsystem": "sock", 00:04:51.650 "config": [ 00:04:51.650 { 00:04:51.650 "method": "sock_set_default_impl", 00:04:51.650 "params": { 00:04:51.651 "impl_name": "uring" 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "sock_impl_set_options", 00:04:51.651 "params": { 00:04:51.651 "impl_name": "ssl", 00:04:51.651 "recv_buf_size": 4096, 00:04:51.651 "send_buf_size": 4096, 00:04:51.651 "enable_recv_pipe": true, 00:04:51.651 "enable_quickack": false, 00:04:51.651 "enable_placement_id": 0, 00:04:51.651 "enable_zerocopy_send_server": true, 00:04:51.651 "enable_zerocopy_send_client": false, 00:04:51.651 "zerocopy_threshold": 0, 00:04:51.651 "tls_version": 0, 00:04:51.651 "enable_ktls": false 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "sock_impl_set_options", 00:04:51.651 "params": { 00:04:51.651 "impl_name": "posix", 00:04:51.651 "recv_buf_size": 2097152, 00:04:51.651 "send_buf_size": 2097152, 00:04:51.651 "enable_recv_pipe": true, 00:04:51.651 "enable_quickack": false, 00:04:51.651 "enable_placement_id": 0, 00:04:51.651 "enable_zerocopy_send_server": true, 00:04:51.651 "enable_zerocopy_send_client": false, 00:04:51.651 "zerocopy_threshold": 0, 00:04:51.651 "tls_version": 0, 00:04:51.651 "enable_ktls": false 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "sock_impl_set_options", 00:04:51.651 "params": { 00:04:51.651 "impl_name": "uring", 00:04:51.651 "recv_buf_size": 2097152, 00:04:51.651 "send_buf_size": 2097152, 00:04:51.651 "enable_recv_pipe": true, 00:04:51.651 "enable_quickack": false, 00:04:51.651 "enable_placement_id": 0, 00:04:51.651 "enable_zerocopy_send_server": false, 00:04:51.651 "enable_zerocopy_send_client": false, 00:04:51.651 "zerocopy_threshold": 0, 00:04:51.651 "tls_version": 0, 00:04:51.651 "enable_ktls": false 00:04:51.651 } 00:04:51.651 } 00:04:51.651 ] 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "vmd", 00:04:51.651 "config": [] 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "accel", 00:04:51.651 "config": [ 00:04:51.651 { 00:04:51.651 "method": "accel_set_options", 00:04:51.651 "params": { 00:04:51.651 "small_cache_size": 128, 00:04:51.651 "large_cache_size": 16, 00:04:51.651 "task_count": 2048, 00:04:51.651 "sequence_count": 2048, 00:04:51.651 "buf_count": 2048 00:04:51.651 } 00:04:51.651 } 00:04:51.651 ] 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "bdev", 00:04:51.651 "config": [ 00:04:51.651 { 00:04:51.651 "method": "bdev_set_options", 00:04:51.651 "params": { 00:04:51.651 "bdev_io_pool_size": 65535, 00:04:51.651 "bdev_io_cache_size": 256, 00:04:51.651 "bdev_auto_examine": true, 00:04:51.651 "iobuf_small_cache_size": 128, 00:04:51.651 "iobuf_large_cache_size": 16 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "bdev_raid_set_options", 00:04:51.651 "params": { 00:04:51.651 "process_window_size_kb": 1024 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "bdev_iscsi_set_options", 00:04:51.651 "params": { 00:04:51.651 "timeout_sec": 30 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "bdev_nvme_set_options", 00:04:51.651 "params": { 00:04:51.651 "action_on_timeout": "none", 00:04:51.651 "timeout_us": 0, 00:04:51.651 "timeout_admin_us": 0, 00:04:51.651 "keep_alive_timeout_ms": 10000, 00:04:51.651 "arbitration_burst": 0, 00:04:51.651 "low_priority_weight": 0, 00:04:51.651 "medium_priority_weight": 0, 00:04:51.651 "high_priority_weight": 0, 00:04:51.651 "nvme_adminq_poll_period_us": 10000, 
00:04:51.651 "nvme_ioq_poll_period_us": 0, 00:04:51.651 "io_queue_requests": 0, 00:04:51.651 "delay_cmd_submit": true, 00:04:51.651 "transport_retry_count": 4, 00:04:51.651 "bdev_retry_count": 3, 00:04:51.651 "transport_ack_timeout": 0, 00:04:51.651 "ctrlr_loss_timeout_sec": 0, 00:04:51.651 "reconnect_delay_sec": 0, 00:04:51.651 "fast_io_fail_timeout_sec": 0, 00:04:51.651 "disable_auto_failback": false, 00:04:51.651 "generate_uuids": false, 00:04:51.651 "transport_tos": 0, 00:04:51.651 "nvme_error_stat": false, 00:04:51.651 "rdma_srq_size": 0, 00:04:51.651 "io_path_stat": false, 00:04:51.651 "allow_accel_sequence": false, 00:04:51.651 "rdma_max_cq_size": 0, 00:04:51.651 "rdma_cm_event_timeout_ms": 0, 00:04:51.651 "dhchap_digests": [ 00:04:51.651 "sha256", 00:04:51.651 "sha384", 00:04:51.651 "sha512" 00:04:51.651 ], 00:04:51.651 "dhchap_dhgroups": [ 00:04:51.651 "null", 00:04:51.651 "ffdhe2048", 00:04:51.651 "ffdhe3072", 00:04:51.651 "ffdhe4096", 00:04:51.651 "ffdhe6144", 00:04:51.651 "ffdhe8192" 00:04:51.651 ] 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "bdev_nvme_set_hotplug", 00:04:51.651 "params": { 00:04:51.651 "period_us": 100000, 00:04:51.651 "enable": false 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "bdev_wait_for_examine" 00:04:51.651 } 00:04:51.651 ] 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "scsi", 00:04:51.651 "config": null 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "scheduler", 00:04:51.651 "config": [ 00:04:51.651 { 00:04:51.651 "method": "framework_set_scheduler", 00:04:51.651 "params": { 00:04:51.651 "name": "static" 00:04:51.651 } 00:04:51.651 } 00:04:51.651 ] 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "vhost_scsi", 00:04:51.651 "config": [] 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "vhost_blk", 00:04:51.651 "config": [] 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "ublk", 00:04:51.651 "config": [] 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "nbd", 00:04:51.651 "config": [] 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "nvmf", 00:04:51.651 "config": [ 00:04:51.651 { 00:04:51.651 "method": "nvmf_set_config", 00:04:51.651 "params": { 00:04:51.651 "discovery_filter": "match_any", 00:04:51.651 "admin_cmd_passthru": { 00:04:51.651 "identify_ctrlr": false 00:04:51.651 } 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "nvmf_set_max_subsystems", 00:04:51.651 "params": { 00:04:51.651 "max_subsystems": 1024 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "nvmf_set_crdt", 00:04:51.651 "params": { 00:04:51.651 "crdt1": 0, 00:04:51.651 "crdt2": 0, 00:04:51.651 "crdt3": 0 00:04:51.651 } 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "method": "nvmf_create_transport", 00:04:51.651 "params": { 00:04:51.651 "trtype": "TCP", 00:04:51.651 "max_queue_depth": 128, 00:04:51.651 "max_io_qpairs_per_ctrlr": 127, 00:04:51.651 "in_capsule_data_size": 4096, 00:04:51.651 "max_io_size": 131072, 00:04:51.651 "io_unit_size": 131072, 00:04:51.651 "max_aq_depth": 128, 00:04:51.651 "num_shared_buffers": 511, 00:04:51.651 "buf_cache_size": 4294967295, 00:04:51.651 "dif_insert_or_strip": false, 00:04:51.651 "zcopy": false, 00:04:51.651 "c2h_success": true, 00:04:51.651 "sock_priority": 0, 00:04:51.651 "abort_timeout_sec": 1, 00:04:51.651 "ack_timeout": 0, 00:04:51.651 "data_wr_pool_size": 0 00:04:51.651 } 00:04:51.651 } 00:04:51.651 ] 00:04:51.651 }, 00:04:51.651 { 00:04:51.651 "subsystem": "iscsi", 00:04:51.651 "config": [ 
00:04:51.651 { 00:04:51.651 "method": "iscsi_set_options", 00:04:51.651 "params": { 00:04:51.651 "node_base": "iqn.2016-06.io.spdk", 00:04:51.651 "max_sessions": 128, 00:04:51.651 "max_connections_per_session": 2, 00:04:51.651 "max_queue_depth": 64, 00:04:51.651 "default_time2wait": 2, 00:04:51.651 "default_time2retain": 20, 00:04:51.651 "first_burst_length": 8192, 00:04:51.651 "immediate_data": true, 00:04:51.651 "allow_duplicated_isid": false, 00:04:51.651 "error_recovery_level": 0, 00:04:51.651 "nop_timeout": 60, 00:04:51.651 "nop_in_interval": 30, 00:04:51.651 "disable_chap": false, 00:04:51.651 "require_chap": false, 00:04:51.651 "mutual_chap": false, 00:04:51.651 "chap_group": 0, 00:04:51.651 "max_large_datain_per_connection": 64, 00:04:51.651 "max_r2t_per_connection": 4, 00:04:51.651 "pdu_pool_size": 36864, 00:04:51.651 "immediate_data_pool_size": 16384, 00:04:51.651 "data_out_pool_size": 2048 00:04:51.651 } 00:04:51.651 } 00:04:51.651 ] 00:04:51.651 } 00:04:51.651 ] 00:04:51.651 } 00:04:51.651 08:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:51.651 08:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58920 00:04:51.651 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 58920 ']' 00:04:51.651 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 58920 00:04:51.651 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:04:51.651 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:51.651 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 58920 00:04:51.651 killing process with pid 58920 00:04:51.651 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:51.651 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:51.652 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 58920' 00:04:51.652 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 58920 00:04:51.652 08:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 58920 00:04:52.220 08:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58953 00:04:52.220 08:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:52.220 08:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58953 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 58953 ']' 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 58953 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 58953 00:04:57.495 killing process with pid 58953 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- 
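skip_rpc_with_json round-trips the runtime configuration: it creates the TCP transport over RPC, dumps everything with save_config (the JSON printed above is that dump), then restarts the target read-only from the file and greps its log for the transport init message. A hand-run sketch with the same paths as this run; capturing the second target's output into log.txt is an assumption that mirrors the script's LOG_PATH:

  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  # Relaunch without an RPC server; the saved JSON recreates the TCP transport.
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt && echo "config replayed"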
common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 58953' 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 58953 00:04:57.495 08:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 58953 00:04:57.754 08:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:57.754 08:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:57.754 00:04:57.754 real 0m7.358s 00:04:57.754 user 0m6.879s 00:04:57.754 sys 0m0.863s 00:04:57.754 08:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:57.754 08:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.754 ************************************ 00:04:57.754 END TEST skip_rpc_with_json 00:04:57.754 ************************************ 00:04:57.754 08:00:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.754 08:00:19 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:57.754 08:00:19 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:57.754 08:00:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.754 ************************************ 00:04:57.754 START TEST skip_rpc_with_delay 00:04:57.754 ************************************ 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.755 [2024-06-10 08:00:19.565371] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to 
be started. 00:04:57.755 [2024-06-10 08:00:19.565519] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:57.755 00:04:57.755 real 0m0.091s 00:04:57.755 user 0m0.063s 00:04:57.755 sys 0m0.027s 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:57.755 ************************************ 00:04:57.755 END TEST skip_rpc_with_delay 00:04:57.755 ************************************ 00:04:57.755 08:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:58.013 08:00:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:58.013 08:00:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:58.013 08:00:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:58.013 08:00:19 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:58.013 08:00:19 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:58.013 08:00:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.013 ************************************ 00:04:58.013 START TEST exit_on_failed_rpc_init 00:04:58.013 ************************************ 00:04:58.013 08:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:04:58.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.013 08:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59057 00:04:58.013 08:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59057 00:04:58.014 08:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 59057 ']' 00:04:58.014 08:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.014 08:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.014 08:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:58.014 08:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.014 08:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:58.014 08:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.014 [2024-06-10 08:00:19.709517] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:04:58.014 [2024-06-10 08:00:19.710167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59057 ] 00:04:58.014 [2024-06-10 08:00:19.846089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.308 [2024-06-10 08:00:19.958937] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.308 [2024-06-10 08:00:20.027308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.878 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:58.878 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:04:58.878 08:00:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.878 08:00:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.878 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:04:58.878 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.878 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.878 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:58.878 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.878 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:58.879 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.879 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:58.879 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.879 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:58.879 08:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.140 [2024-06-10 08:00:20.779126] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:04:59.140 [2024-06-10 08:00:20.779215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59075 ] 00:04:59.140 [2024-06-10 08:00:20.918345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.398 [2024-06-10 08:00:21.087243] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.398 [2024-06-10 08:00:21.087619] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
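The 'Unix domain socket path /var/tmp/spdk.sock in use' error above is the point of this test: a first spdk_tgt already owns the default RPC socket, so a second instance must fail its RPC initialization and exit non-zero. Condensed from the trace (NOT comes from common/autotest_common.sh and simply asserts that the wrapped command exits with a non-zero status), the scenario amounts to:

    # first target binds the default RPC socket /var/tmp/spdk.sock and stays up
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_pid=$!
    waitforlisten "$spdk_pid"
    # second instance on another core mask hits the same socket and is expected to fail
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2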
00:04:59.398 [2024-06-10 08:00:21.087818] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:59.398 [2024-06-10 08:00:21.087955] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59057 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 59057 ']' 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 59057 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:59.398 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 59057 00:04:59.657 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:59.657 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:59.657 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 59057' 00:04:59.657 killing process with pid 59057 00:04:59.657 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 59057 00:04:59.657 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 59057 00:05:00.223 00:05:00.223 real 0m2.169s 00:05:00.223 user 0m2.542s 00:05:00.223 sys 0m0.531s 00:05:00.223 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.223 ************************************ 00:05:00.223 END TEST exit_on_failed_rpc_init 00:05:00.223 ************************************ 00:05:00.223 08:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.223 08:00:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.223 ************************************ 00:05:00.223 END TEST skip_rpc 00:05:00.223 ************************************ 00:05:00.223 00:05:00.223 real 0m15.504s 00:05:00.223 user 0m14.707s 00:05:00.223 sys 0m1.970s 00:05:00.223 08:00:21 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.223 08:00:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.223 08:00:21 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.223 08:00:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:00.223 08:00:21 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.223 08:00:21 -- common/autotest_common.sh@10 -- # set +x 00:05:00.223 
************************************ 00:05:00.223 START TEST rpc_client 00:05:00.223 ************************************ 00:05:00.223 08:00:21 rpc_client -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.223 * Looking for test storage... 00:05:00.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:00.223 08:00:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:00.223 OK 00:05:00.223 08:00:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:00.223 00:05:00.223 real 0m0.105s 00:05:00.223 user 0m0.046s 00:05:00.223 sys 0m0.065s 00:05:00.223 ************************************ 00:05:00.223 END TEST rpc_client 00:05:00.223 ************************************ 00:05:00.223 08:00:22 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.223 08:00:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:00.223 08:00:22 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:00.223 08:00:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:00.223 08:00:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.223 08:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:00.223 ************************************ 00:05:00.223 START TEST json_config 00:05:00.223 ************************************ 00:05:00.223 08:00:22 json_config -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:00.481 08:00:22 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.481 08:00:22 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.481 08:00:22 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.481 08:00:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.481 08:00:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.481 08:00:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.481 08:00:22 json_config -- paths/export.sh@5 -- # export PATH 00:05:00.481 08:00:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@47 -- # : 0 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:00.481 08:00:22 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:00.481 08:00:22 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:00.481 INFO: JSON configuration test init 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:00.481 08:00:22 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:00.481 08:00:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:00.481 08:00:22 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:00.481 08:00:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.481 Waiting for target to run... 00:05:00.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.481 08:00:22 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:00.481 08:00:22 json_config -- json_config/common.sh@9 -- # local app=target 00:05:00.481 08:00:22 json_config -- json_config/common.sh@10 -- # shift 00:05:00.481 08:00:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.481 08:00:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.481 08:00:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.481 08:00:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.481 08:00:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.481 08:00:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59204 00:05:00.481 08:00:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
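The launch traced next is the generic json_config_test_start_app pattern: start spdk_tgt on a private RPC socket and block until that socket answers. A stand-alone sketch built only from the paths and options visible in the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    app_pid=$!
    # waitforlisten (common/autotest_common.sh) keeps retrying rpc.py against the socket
    # until the target responds or the retry budget runs out
    waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock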
00:05:00.481 08:00:22 json_config -- json_config/common.sh@25 -- # waitforlisten 59204 /var/tmp/spdk_tgt.sock 00:05:00.481 08:00:22 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:00.481 08:00:22 json_config -- common/autotest_common.sh@830 -- # '[' -z 59204 ']' 00:05:00.481 08:00:22 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.481 08:00:22 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:00.481 08:00:22 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.481 08:00:22 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:00.481 08:00:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.481 [2024-06-10 08:00:22.220337] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:00.481 [2024-06-10 08:00:22.221014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59204 ] 00:05:01.048 [2024-06-10 08:00:22.733243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.048 [2024-06-10 08:00:22.853039] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.615 08:00:23 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:01.615 08:00:23 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:01.615 08:00:23 json_config -- json_config/common.sh@26 -- # echo '' 00:05:01.615 00:05:01.615 08:00:23 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:01.615 08:00:23 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:01.615 08:00:23 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:01.615 08:00:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.615 08:00:23 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:01.615 08:00:23 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:01.615 08:00:23 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:01.615 08:00:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.615 08:00:23 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:01.615 08:00:23 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:01.615 08:00:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:01.874 [2024-06-10 08:00:23.561799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:02.133 08:00:23 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:02.133 08:00:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:02.133 08:00:23 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:02.133 08:00:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.133 08:00:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:02.133 08:00:23 json_config -- json_config/json_config.sh@46 
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:02.133 08:00:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:02.133 08:00:23 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:02.133 08:00:23 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:02.133 08:00:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:02.393 08:00:24 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:02.393 08:00:24 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:02.393 08:00:24 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:02.393 08:00:24 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:02.393 08:00:24 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:02.394 08:00:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:02.394 08:00:24 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:02.394 08:00:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:02.394 08:00:24 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:02.394 08:00:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:02.657 MallocForNvmf0 00:05:02.657 08:00:24 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:02.657 08:00:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:02.914 MallocForNvmf1 00:05:02.914 08:00:24 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:02.914 08:00:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.172 [2024-06-10 08:00:24.886673] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.172 08:00:24 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.172 08:00:24 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.431 08:00:25 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.431 08:00:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.689 08:00:25 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:03.689 08:00:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:03.948 08:00:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:03.948 08:00:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.207 [2024-06-10 08:00:25.867253] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.207 08:00:25 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:04.207 08:00:25 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:04.207 08:00:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.207 08:00:25 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:04.207 08:00:25 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:04.207 08:00:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.207 08:00:25 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:04.207 08:00:25 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.207 08:00:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.465 MallocBdevForConfigChangeCheck 00:05:04.465 08:00:26 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:04.465 08:00:26 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:04.465 08:00:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.465 08:00:26 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:04.465 08:00:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.034 08:00:26 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:05.034 INFO: shutting down applications... 
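For readability, the NVMe-oF configuration assembled above through tgt_rpc reduces to the RPC sequence below; $rpc and $sock are shorthand introduced here, while every call and argument is taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB bdev, 512 B blocks
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB bdev, 1024 B blocks
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420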
00:05:05.034 08:00:26 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:05.034 08:00:26 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:05.034 08:00:26 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:05.034 08:00:26 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:05.294 Calling clear_iscsi_subsystem 00:05:05.294 Calling clear_nvmf_subsystem 00:05:05.294 Calling clear_nbd_subsystem 00:05:05.294 Calling clear_ublk_subsystem 00:05:05.294 Calling clear_vhost_blk_subsystem 00:05:05.294 Calling clear_vhost_scsi_subsystem 00:05:05.294 Calling clear_bdev_subsystem 00:05:05.294 08:00:26 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:05.294 08:00:26 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:05.294 08:00:26 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:05.294 08:00:26 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.294 08:00:26 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:05.294 08:00:26 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:05.553 08:00:27 json_config -- json_config/json_config.sh@345 -- # break 00:05:05.553 08:00:27 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:05.553 08:00:27 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:05.553 08:00:27 json_config -- json_config/common.sh@31 -- # local app=target 00:05:05.553 08:00:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:05.553 08:00:27 json_config -- json_config/common.sh@35 -- # [[ -n 59204 ]] 00:05:05.553 08:00:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59204 00:05:05.553 08:00:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:05.553 08:00:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.553 08:00:27 json_config -- json_config/common.sh@41 -- # kill -0 59204 00:05:05.553 08:00:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.120 08:00:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.120 08:00:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.120 08:00:27 json_config -- json_config/common.sh@41 -- # kill -0 59204 00:05:06.120 SPDK target shutdown done 00:05:06.120 08:00:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:06.120 08:00:27 json_config -- json_config/common.sh@43 -- # break 00:05:06.120 08:00:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:06.120 08:00:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:06.120 INFO: relaunching applications... 00:05:06.120 08:00:27 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
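The relaunch step that follows restarts the target from the configuration saved earlier instead of replaying individual RPCs. A minimal sketch of that round-trip (the redirection into spdk_tgt_config.json is implied by the trace rather than shown verbatim):

    # capture the live configuration of the running target ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    # ... then boot a fresh target directly from that file
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json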
00:05:06.120 08:00:27 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:06.120 08:00:27 json_config -- json_config/common.sh@9 -- # local app=target 00:05:06.120 08:00:27 json_config -- json_config/common.sh@10 -- # shift 00:05:06.120 08:00:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.120 08:00:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.120 08:00:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.120 08:00:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.120 08:00:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.120 08:00:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59389 00:05:06.120 Waiting for target to run... 00:05:06.120 08:00:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.120 08:00:27 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:06.120 08:00:27 json_config -- json_config/common.sh@25 -- # waitforlisten 59389 /var/tmp/spdk_tgt.sock 00:05:06.120 08:00:27 json_config -- common/autotest_common.sh@830 -- # '[' -z 59389 ']' 00:05:06.120 08:00:27 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.120 08:00:27 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:06.120 08:00:27 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.120 08:00:27 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:06.120 08:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.120 [2024-06-10 08:00:27.961051] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:06.120 [2024-06-10 08:00:27.961190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59389 ] 00:05:06.687 [2024-06-10 08:00:28.427949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.687 [2024-06-10 08:00:28.540217] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.946 [2024-06-10 08:00:28.666022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.204 [2024-06-10 08:00:28.868042] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.205 [2024-06-10 08:00:28.900183] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:07.205 00:05:07.205 INFO: Checking if target configuration is the same... 
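The check that follows is done by json_diff.sh: both configurations are normalized with config_filter.py -method sort before being compared, so key ordering in the JSON does not register as a change. Roughly (the input names and temp files below are placeholders; the real script derives them with mktemp, as the trace shows):

    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    $filter -method sort < live_config.json  > /tmp/a.sorted
    $filter -method sort < saved_config.json > /tmp/b.sorted
    diff -u /tmp/a.sorted /tmp/b.sorted && echo 'INFO: JSON config files are the same'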
00:05:07.205 08:00:28 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:07.205 08:00:28 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:07.205 08:00:28 json_config -- json_config/common.sh@26 -- # echo '' 00:05:07.205 08:00:28 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:07.205 08:00:28 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:07.205 08:00:28 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:07.205 08:00:28 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:07.205 08:00:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.205 + '[' 2 -ne 2 ']' 00:05:07.205 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:07.205 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:07.205 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:07.205 +++ basename /dev/fd/62 00:05:07.205 ++ mktemp /tmp/62.XXX 00:05:07.205 + tmp_file_1=/tmp/62.Xic 00:05:07.205 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:07.205 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.205 + tmp_file_2=/tmp/spdk_tgt_config.json.oDd 00:05:07.205 + ret=0 00:05:07.205 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:07.463 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:07.721 + diff -u /tmp/62.Xic /tmp/spdk_tgt_config.json.oDd 00:05:07.721 INFO: JSON config files are the same 00:05:07.721 + echo 'INFO: JSON config files are the same' 00:05:07.721 + rm /tmp/62.Xic /tmp/spdk_tgt_config.json.oDd 00:05:07.721 + exit 0 00:05:07.721 INFO: changing configuration and checking if this can be detected... 00:05:07.721 08:00:29 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:07.721 08:00:29 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:07.721 08:00:29 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:07.721 08:00:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:07.978 08:00:29 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:07.978 08:00:29 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:07.978 08:00:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.978 + '[' 2 -ne 2 ']' 00:05:07.978 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:07.978 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:07.978 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:07.978 +++ basename /dev/fd/62 00:05:07.978 ++ mktemp /tmp/62.XXX 00:05:07.978 + tmp_file_1=/tmp/62.9Zr 00:05:07.978 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:07.978 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.978 + tmp_file_2=/tmp/spdk_tgt_config.json.qMv 00:05:07.978 + ret=0 00:05:07.978 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:08.235 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:08.235 + diff -u /tmp/62.9Zr /tmp/spdk_tgt_config.json.qMv 00:05:08.235 + ret=1 00:05:08.235 + echo '=== Start of file: /tmp/62.9Zr ===' 00:05:08.235 + cat /tmp/62.9Zr 00:05:08.235 + echo '=== End of file: /tmp/62.9Zr ===' 00:05:08.235 + echo '' 00:05:08.235 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qMv ===' 00:05:08.235 + cat /tmp/spdk_tgt_config.json.qMv 00:05:08.235 + echo '=== End of file: /tmp/spdk_tgt_config.json.qMv ===' 00:05:08.235 + echo '' 00:05:08.235 + rm /tmp/62.9Zr /tmp/spdk_tgt_config.json.qMv 00:05:08.235 + exit 1 00:05:08.235 INFO: configuration change detected. 00:05:08.236 08:00:30 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:08.236 08:00:30 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:08.236 08:00:30 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:08.236 08:00:30 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:08.236 08:00:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.236 08:00:30 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:08.236 08:00:30 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:08.236 08:00:30 json_config -- json_config/json_config.sh@317 -- # [[ -n 59389 ]] 00:05:08.236 08:00:30 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:08.236 08:00:30 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:08.236 08:00:30 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:08.236 08:00:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.498 08:00:30 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:08.498 08:00:30 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:08.498 08:00:30 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:08.498 08:00:30 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:08.499 08:00:30 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:08.499 08:00:30 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.499 08:00:30 json_config -- json_config/json_config.sh@323 -- # killprocess 59389 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@949 -- # '[' -z 59389 ']' 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@953 -- # kill -0 59389 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@954 -- # uname 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 59389 00:05:08.499 
08:00:30 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:08.499 killing process with pid 59389 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 59389' 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@968 -- # kill 59389 00:05:08.499 08:00:30 json_config -- common/autotest_common.sh@973 -- # wait 59389 00:05:08.756 08:00:30 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:08.756 08:00:30 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:08.756 08:00:30 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:08.756 08:00:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.756 08:00:30 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:08.756 INFO: Success 00:05:08.756 08:00:30 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:08.756 ************************************ 00:05:08.756 END TEST json_config 00:05:08.757 ************************************ 00:05:08.757 00:05:08.757 real 0m8.538s 00:05:08.757 user 0m12.080s 00:05:08.757 sys 0m1.915s 00:05:08.757 08:00:30 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:08.757 08:00:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.015 08:00:30 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:09.015 08:00:30 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:09.015 08:00:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:09.015 08:00:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.015 ************************************ 00:05:09.015 START TEST json_config_extra_key 00:05:09.015 ************************************ 00:05:09.015 08:00:30 json_config_extra_key -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:09.015 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:05:09.015 08:00:30 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:09.015 08:00:30 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.015 08:00:30 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.015 08:00:30 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.015 08:00:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.015 08:00:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.015 08:00:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.015 08:00:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:09.015 08:00:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.015 08:00:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.015 08:00:30 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:09.016 08:00:30 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:09.016 08:00:30 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:09.016 INFO: launching applications... 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:09.016 08:00:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59535 00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.016 Waiting for target to run... 
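Unlike the json_config test above, which built its configuration over RPC first, this test boots the target straight from a canned JSON file; the launch condensed from the trace is:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json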
00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59535 /var/tmp/spdk_tgt.sock 00:05:09.016 08:00:30 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:09.016 08:00:30 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 59535 ']' 00:05:09.016 08:00:30 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.016 08:00:30 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:09.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.016 08:00:30 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.016 08:00:30 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:09.016 08:00:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.016 [2024-06-10 08:00:30.790917] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:09.016 [2024-06-10 08:00:30.791245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59535 ] 00:05:09.581 [2024-06-10 08:00:31.246830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.581 [2024-06-10 08:00:31.356665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.581 [2024-06-10 08:00:31.377530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.148 08:00:31 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:10.148 00:05:10.148 INFO: shutting down applications... 00:05:10.148 08:00:31 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:10.148 08:00:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:10.148 08:00:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
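The shutdown traced next is the generic json_config_test_shutdown_app loop: send SIGINT, then poll the pid for up to 30 half-second intervals before declaring the target gone. A sketch of that pattern (the stderr redirect is added here for readability and is not from the test source):

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break    # target has exited
        sleep 0.5
    done
    echo 'SPDK target shutdown done'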
00:05:10.148 08:00:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:10.148 08:00:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:10.148 08:00:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.148 08:00:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59535 ]] 00:05:10.148 08:00:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59535 00:05:10.148 08:00:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.148 08:00:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.148 08:00:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59535 00:05:10.148 08:00:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.714 08:00:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.714 08:00:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.714 08:00:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59535 00:05:10.714 08:00:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.971 08:00:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.971 08:00:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.971 08:00:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59535 00:05:10.971 08:00:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:10.971 08:00:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:10.971 SPDK target shutdown done 00:05:10.971 Success 00:05:10.971 08:00:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:10.971 08:00:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:10.971 08:00:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:10.971 00:05:10.971 real 0m2.169s 00:05:10.971 user 0m1.728s 00:05:10.971 sys 0m0.463s 00:05:10.971 ************************************ 00:05:10.971 END TEST json_config_extra_key 00:05:10.971 ************************************ 00:05:10.971 08:00:32 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:10.971 08:00:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.240 08:00:32 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.240 08:00:32 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:11.240 08:00:32 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:11.240 08:00:32 -- common/autotest_common.sh@10 -- # set +x 00:05:11.240 ************************************ 00:05:11.240 START TEST alias_rpc 00:05:11.240 ************************************ 00:05:11.240 08:00:32 alias_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.240 * Looking for test storage... 
00:05:11.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:11.240 08:00:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.240 08:00:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59606 00:05:11.240 08:00:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59606 00:05:11.240 08:00:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.240 08:00:32 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 59606 ']' 00:05:11.240 08:00:32 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.240 08:00:32 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:11.240 08:00:32 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.240 08:00:32 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:11.240 08:00:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.240 [2024-06-10 08:00:32.998948] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:11.240 [2024-06-10 08:00:32.999029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59606 ] 00:05:11.517 [2024-06-10 08:00:33.131414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.517 [2024-06-10 08:00:33.263953] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.517 [2024-06-10 08:00:33.339716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:12.096 08:00:33 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:12.096 08:00:33 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:12.096 08:00:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:12.661 08:00:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59606 00:05:12.661 08:00:34 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 59606 ']' 00:05:12.661 08:00:34 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 59606 00:05:12.661 08:00:34 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:05:12.661 08:00:34 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:12.661 08:00:34 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 59606 00:05:12.661 killing process with pid 59606 00:05:12.661 08:00:34 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:12.661 08:00:34 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:12.661 08:00:34 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 59606' 00:05:12.661 08:00:34 alias_rpc -- common/autotest_common.sh@968 -- # kill 59606 00:05:12.661 08:00:34 alias_rpc -- common/autotest_common.sh@973 -- # wait 59606 00:05:13.228 ************************************ 00:05:13.228 END TEST alias_rpc 00:05:13.228 ************************************ 00:05:13.228 00:05:13.228 real 0m1.956s 00:05:13.228 user 0m2.109s 00:05:13.228 sys 0m0.514s 00:05:13.228 08:00:34 alias_rpc -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:05:13.228 08:00:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.228 08:00:34 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:13.228 08:00:34 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:13.228 08:00:34 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.228 08:00:34 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.228 08:00:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.228 ************************************ 00:05:13.228 START TEST spdkcli_tcp 00:05:13.228 ************************************ 00:05:13.228 08:00:34 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:13.228 * Looking for test storage... 00:05:13.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:13.228 08:00:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:13.228 08:00:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:13.228 08:00:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:13.228 08:00:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:13.228 08:00:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:13.228 08:00:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:13.228 08:00:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:13.228 08:00:34 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:13.228 08:00:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.228 08:00:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59682 00:05:13.228 08:00:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59682 00:05:13.228 08:00:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:13.228 08:00:34 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 59682 ']' 00:05:13.228 08:00:34 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.228 08:00:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:13.228 08:00:34 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.228 08:00:34 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:13.228 08:00:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.228 [2024-06-10 08:00:35.030557] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
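The spdkcli_tcp stage that begins here starts spdk_tgt on two cores (-m 0x3) and then, as traced below, exposes the target's UNIX-domain RPC socket over TCP with socat before listing every registered RPC; the long JSON array further down is the rpc_get_methods response. The bridge-and-query pattern, as a sketch with error handling and cleanup traps omitted:

    # Bridge TCP port 9998 to the target's UNIX RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Query over TCP; the -r retry and -t timeout values match the traced invocation.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"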
00:05:13.228 [2024-06-10 08:00:35.030673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59682 ] 00:05:13.486 [2024-06-10 08:00:35.170586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.486 [2024-06-10 08:00:35.292590] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.486 [2024-06-10 08:00:35.292599] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.745 [2024-06-10 08:00:35.368803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.310 08:00:35 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:14.310 08:00:35 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:05:14.310 08:00:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:14.310 08:00:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59699 00:05:14.310 08:00:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:14.569 [ 00:05:14.569 "bdev_malloc_delete", 00:05:14.569 "bdev_malloc_create", 00:05:14.569 "bdev_null_resize", 00:05:14.569 "bdev_null_delete", 00:05:14.569 "bdev_null_create", 00:05:14.569 "bdev_nvme_cuse_unregister", 00:05:14.569 "bdev_nvme_cuse_register", 00:05:14.569 "bdev_opal_new_user", 00:05:14.569 "bdev_opal_set_lock_state", 00:05:14.569 "bdev_opal_delete", 00:05:14.569 "bdev_opal_get_info", 00:05:14.569 "bdev_opal_create", 00:05:14.569 "bdev_nvme_opal_revert", 00:05:14.569 "bdev_nvme_opal_init", 00:05:14.569 "bdev_nvme_send_cmd", 00:05:14.569 "bdev_nvme_get_path_iostat", 00:05:14.569 "bdev_nvme_get_mdns_discovery_info", 00:05:14.569 "bdev_nvme_stop_mdns_discovery", 00:05:14.569 "bdev_nvme_start_mdns_discovery", 00:05:14.569 "bdev_nvme_set_multipath_policy", 00:05:14.569 "bdev_nvme_set_preferred_path", 00:05:14.569 "bdev_nvme_get_io_paths", 00:05:14.569 "bdev_nvme_remove_error_injection", 00:05:14.569 "bdev_nvme_add_error_injection", 00:05:14.569 "bdev_nvme_get_discovery_info", 00:05:14.569 "bdev_nvme_stop_discovery", 00:05:14.569 "bdev_nvme_start_discovery", 00:05:14.569 "bdev_nvme_get_controller_health_info", 00:05:14.569 "bdev_nvme_disable_controller", 00:05:14.569 "bdev_nvme_enable_controller", 00:05:14.569 "bdev_nvme_reset_controller", 00:05:14.569 "bdev_nvme_get_transport_statistics", 00:05:14.569 "bdev_nvme_apply_firmware", 00:05:14.569 "bdev_nvme_detach_controller", 00:05:14.569 "bdev_nvme_get_controllers", 00:05:14.569 "bdev_nvme_attach_controller", 00:05:14.569 "bdev_nvme_set_hotplug", 00:05:14.569 "bdev_nvme_set_options", 00:05:14.569 "bdev_passthru_delete", 00:05:14.569 "bdev_passthru_create", 00:05:14.569 "bdev_lvol_set_parent_bdev", 00:05:14.569 "bdev_lvol_set_parent", 00:05:14.569 "bdev_lvol_check_shallow_copy", 00:05:14.569 "bdev_lvol_start_shallow_copy", 00:05:14.569 "bdev_lvol_grow_lvstore", 00:05:14.569 "bdev_lvol_get_lvols", 00:05:14.569 "bdev_lvol_get_lvstores", 00:05:14.569 "bdev_lvol_delete", 00:05:14.569 "bdev_lvol_set_read_only", 00:05:14.569 "bdev_lvol_resize", 00:05:14.569 "bdev_lvol_decouple_parent", 00:05:14.569 "bdev_lvol_inflate", 00:05:14.569 "bdev_lvol_rename", 00:05:14.569 "bdev_lvol_clone_bdev", 00:05:14.569 "bdev_lvol_clone", 00:05:14.569 "bdev_lvol_snapshot", 00:05:14.569 "bdev_lvol_create", 00:05:14.569 
"bdev_lvol_delete_lvstore", 00:05:14.569 "bdev_lvol_rename_lvstore", 00:05:14.569 "bdev_lvol_create_lvstore", 00:05:14.569 "bdev_raid_set_options", 00:05:14.569 "bdev_raid_remove_base_bdev", 00:05:14.569 "bdev_raid_add_base_bdev", 00:05:14.569 "bdev_raid_delete", 00:05:14.569 "bdev_raid_create", 00:05:14.569 "bdev_raid_get_bdevs", 00:05:14.569 "bdev_error_inject_error", 00:05:14.569 "bdev_error_delete", 00:05:14.569 "bdev_error_create", 00:05:14.569 "bdev_split_delete", 00:05:14.569 "bdev_split_create", 00:05:14.569 "bdev_delay_delete", 00:05:14.569 "bdev_delay_create", 00:05:14.569 "bdev_delay_update_latency", 00:05:14.569 "bdev_zone_block_delete", 00:05:14.569 "bdev_zone_block_create", 00:05:14.569 "blobfs_create", 00:05:14.569 "blobfs_detect", 00:05:14.569 "blobfs_set_cache_size", 00:05:14.569 "bdev_aio_delete", 00:05:14.569 "bdev_aio_rescan", 00:05:14.569 "bdev_aio_create", 00:05:14.569 "bdev_ftl_set_property", 00:05:14.569 "bdev_ftl_get_properties", 00:05:14.569 "bdev_ftl_get_stats", 00:05:14.569 "bdev_ftl_unmap", 00:05:14.569 "bdev_ftl_unload", 00:05:14.569 "bdev_ftl_delete", 00:05:14.569 "bdev_ftl_load", 00:05:14.569 "bdev_ftl_create", 00:05:14.569 "bdev_virtio_attach_controller", 00:05:14.569 "bdev_virtio_scsi_get_devices", 00:05:14.569 "bdev_virtio_detach_controller", 00:05:14.569 "bdev_virtio_blk_set_hotplug", 00:05:14.569 "bdev_iscsi_delete", 00:05:14.569 "bdev_iscsi_create", 00:05:14.569 "bdev_iscsi_set_options", 00:05:14.569 "bdev_uring_delete", 00:05:14.569 "bdev_uring_rescan", 00:05:14.569 "bdev_uring_create", 00:05:14.569 "accel_error_inject_error", 00:05:14.569 "ioat_scan_accel_module", 00:05:14.569 "dsa_scan_accel_module", 00:05:14.569 "iaa_scan_accel_module", 00:05:14.569 "keyring_file_remove_key", 00:05:14.569 "keyring_file_add_key", 00:05:14.569 "keyring_linux_set_options", 00:05:14.569 "iscsi_get_histogram", 00:05:14.569 "iscsi_enable_histogram", 00:05:14.569 "iscsi_set_options", 00:05:14.569 "iscsi_get_auth_groups", 00:05:14.569 "iscsi_auth_group_remove_secret", 00:05:14.569 "iscsi_auth_group_add_secret", 00:05:14.569 "iscsi_delete_auth_group", 00:05:14.569 "iscsi_create_auth_group", 00:05:14.569 "iscsi_set_discovery_auth", 00:05:14.569 "iscsi_get_options", 00:05:14.569 "iscsi_target_node_request_logout", 00:05:14.569 "iscsi_target_node_set_redirect", 00:05:14.569 "iscsi_target_node_set_auth", 00:05:14.569 "iscsi_target_node_add_lun", 00:05:14.569 "iscsi_get_stats", 00:05:14.569 "iscsi_get_connections", 00:05:14.569 "iscsi_portal_group_set_auth", 00:05:14.569 "iscsi_start_portal_group", 00:05:14.569 "iscsi_delete_portal_group", 00:05:14.569 "iscsi_create_portal_group", 00:05:14.569 "iscsi_get_portal_groups", 00:05:14.570 "iscsi_delete_target_node", 00:05:14.570 "iscsi_target_node_remove_pg_ig_maps", 00:05:14.570 "iscsi_target_node_add_pg_ig_maps", 00:05:14.570 "iscsi_create_target_node", 00:05:14.570 "iscsi_get_target_nodes", 00:05:14.570 "iscsi_delete_initiator_group", 00:05:14.570 "iscsi_initiator_group_remove_initiators", 00:05:14.570 "iscsi_initiator_group_add_initiators", 00:05:14.570 "iscsi_create_initiator_group", 00:05:14.570 "iscsi_get_initiator_groups", 00:05:14.570 "nvmf_set_crdt", 00:05:14.570 "nvmf_set_config", 00:05:14.570 "nvmf_set_max_subsystems", 00:05:14.570 "nvmf_stop_mdns_prr", 00:05:14.570 "nvmf_publish_mdns_prr", 00:05:14.570 "nvmf_subsystem_get_listeners", 00:05:14.570 "nvmf_subsystem_get_qpairs", 00:05:14.570 "nvmf_subsystem_get_controllers", 00:05:14.570 "nvmf_get_stats", 00:05:14.570 "nvmf_get_transports", 00:05:14.570 
"nvmf_create_transport", 00:05:14.570 "nvmf_get_targets", 00:05:14.570 "nvmf_delete_target", 00:05:14.570 "nvmf_create_target", 00:05:14.570 "nvmf_subsystem_allow_any_host", 00:05:14.570 "nvmf_subsystem_remove_host", 00:05:14.570 "nvmf_subsystem_add_host", 00:05:14.570 "nvmf_ns_remove_host", 00:05:14.570 "nvmf_ns_add_host", 00:05:14.570 "nvmf_subsystem_remove_ns", 00:05:14.570 "nvmf_subsystem_add_ns", 00:05:14.570 "nvmf_subsystem_listener_set_ana_state", 00:05:14.570 "nvmf_discovery_get_referrals", 00:05:14.570 "nvmf_discovery_remove_referral", 00:05:14.570 "nvmf_discovery_add_referral", 00:05:14.570 "nvmf_subsystem_remove_listener", 00:05:14.570 "nvmf_subsystem_add_listener", 00:05:14.570 "nvmf_delete_subsystem", 00:05:14.570 "nvmf_create_subsystem", 00:05:14.570 "nvmf_get_subsystems", 00:05:14.570 "env_dpdk_get_mem_stats", 00:05:14.570 "nbd_get_disks", 00:05:14.570 "nbd_stop_disk", 00:05:14.570 "nbd_start_disk", 00:05:14.570 "ublk_recover_disk", 00:05:14.570 "ublk_get_disks", 00:05:14.570 "ublk_stop_disk", 00:05:14.570 "ublk_start_disk", 00:05:14.570 "ublk_destroy_target", 00:05:14.570 "ublk_create_target", 00:05:14.570 "virtio_blk_create_transport", 00:05:14.570 "virtio_blk_get_transports", 00:05:14.570 "vhost_controller_set_coalescing", 00:05:14.570 "vhost_get_controllers", 00:05:14.570 "vhost_delete_controller", 00:05:14.570 "vhost_create_blk_controller", 00:05:14.570 "vhost_scsi_controller_remove_target", 00:05:14.570 "vhost_scsi_controller_add_target", 00:05:14.570 "vhost_start_scsi_controller", 00:05:14.570 "vhost_create_scsi_controller", 00:05:14.570 "thread_set_cpumask", 00:05:14.570 "framework_get_scheduler", 00:05:14.570 "framework_set_scheduler", 00:05:14.570 "framework_get_reactors", 00:05:14.570 "thread_get_io_channels", 00:05:14.570 "thread_get_pollers", 00:05:14.570 "thread_get_stats", 00:05:14.570 "framework_monitor_context_switch", 00:05:14.570 "spdk_kill_instance", 00:05:14.570 "log_enable_timestamps", 00:05:14.570 "log_get_flags", 00:05:14.570 "log_clear_flag", 00:05:14.570 "log_set_flag", 00:05:14.570 "log_get_level", 00:05:14.570 "log_set_level", 00:05:14.570 "log_get_print_level", 00:05:14.570 "log_set_print_level", 00:05:14.570 "framework_enable_cpumask_locks", 00:05:14.570 "framework_disable_cpumask_locks", 00:05:14.570 "framework_wait_init", 00:05:14.570 "framework_start_init", 00:05:14.570 "scsi_get_devices", 00:05:14.570 "bdev_get_histogram", 00:05:14.570 "bdev_enable_histogram", 00:05:14.570 "bdev_set_qos_limit", 00:05:14.570 "bdev_set_qd_sampling_period", 00:05:14.570 "bdev_get_bdevs", 00:05:14.570 "bdev_reset_iostat", 00:05:14.570 "bdev_get_iostat", 00:05:14.570 "bdev_examine", 00:05:14.570 "bdev_wait_for_examine", 00:05:14.570 "bdev_set_options", 00:05:14.570 "notify_get_notifications", 00:05:14.570 "notify_get_types", 00:05:14.570 "accel_get_stats", 00:05:14.570 "accel_set_options", 00:05:14.570 "accel_set_driver", 00:05:14.570 "accel_crypto_key_destroy", 00:05:14.570 "accel_crypto_keys_get", 00:05:14.570 "accel_crypto_key_create", 00:05:14.570 "accel_assign_opc", 00:05:14.570 "accel_get_module_info", 00:05:14.570 "accel_get_opc_assignments", 00:05:14.570 "vmd_rescan", 00:05:14.570 "vmd_remove_device", 00:05:14.570 "vmd_enable", 00:05:14.570 "sock_get_default_impl", 00:05:14.570 "sock_set_default_impl", 00:05:14.570 "sock_impl_set_options", 00:05:14.570 "sock_impl_get_options", 00:05:14.570 "iobuf_get_stats", 00:05:14.570 "iobuf_set_options", 00:05:14.570 "framework_get_pci_devices", 00:05:14.570 "framework_get_config", 00:05:14.570 
"framework_get_subsystems", 00:05:14.570 "trace_get_info", 00:05:14.570 "trace_get_tpoint_group_mask", 00:05:14.570 "trace_disable_tpoint_group", 00:05:14.570 "trace_enable_tpoint_group", 00:05:14.570 "trace_clear_tpoint_mask", 00:05:14.570 "trace_set_tpoint_mask", 00:05:14.570 "keyring_get_keys", 00:05:14.570 "spdk_get_version", 00:05:14.570 "rpc_get_methods" 00:05:14.570 ] 00:05:14.570 08:00:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.570 08:00:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:14.570 08:00:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59682 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 59682 ']' 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 59682 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 59682 00:05:14.570 killing process with pid 59682 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 59682' 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 59682 00:05:14.570 08:00:36 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 59682 00:05:15.136 ************************************ 00:05:15.136 END TEST spdkcli_tcp 00:05:15.136 ************************************ 00:05:15.136 00:05:15.136 real 0m2.008s 00:05:15.136 user 0m3.627s 00:05:15.136 sys 0m0.567s 00:05:15.136 08:00:36 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:15.136 08:00:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.136 08:00:36 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.136 08:00:36 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:15.136 08:00:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:15.136 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:05:15.136 ************************************ 00:05:15.136 START TEST dpdk_mem_utility 00:05:15.136 ************************************ 00:05:15.136 08:00:36 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.395 * Looking for test storage... 00:05:15.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:15.395 08:00:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:15.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:15.395 08:00:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59773 00:05:15.395 08:00:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.395 08:00:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59773 00:05:15.395 08:00:37 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 59773 ']' 00:05:15.395 08:00:37 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.395 08:00:37 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:15.395 08:00:37 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.395 08:00:37 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:15.395 08:00:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.395 [2024-06-10 08:00:37.081266] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:15.395 [2024-06-10 08:00:37.081365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59773 ] 00:05:15.395 [2024-06-10 08:00:37.218599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.682 [2024-06-10 08:00:37.369538] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.682 [2024-06-10 08:00:37.447651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:16.249 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:16.249 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:05:16.249 08:00:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:16.249 08:00:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:16.249 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:16.249 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.249 { 00:05:16.249 "filename": "/tmp/spdk_mem_dump.txt" 00:05:16.249 } 00:05:16.249 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:16.249 08:00:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.249 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:16.249 1 heaps totaling size 814.000000 MiB 00:05:16.249 size: 814.000000 MiB heap id: 0 00:05:16.249 end heaps---------- 00:05:16.249 8 mempools totaling size 598.116089 MiB 00:05:16.249 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:16.249 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:16.249 size: 84.521057 MiB name: bdev_io_59773 00:05:16.249 size: 51.011292 MiB name: evtpool_59773 00:05:16.249 size: 50.003479 MiB name: msgpool_59773 00:05:16.249 size: 21.763794 MiB name: PDU_Pool 00:05:16.249 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:16.249 size: 0.026123 MiB name: Session_Pool 00:05:16.249 end mempools------- 00:05:16.249 6 memzones totaling size 4.142822 MiB 00:05:16.249 size: 1.000366 MiB name: RG_ring_0_59773 00:05:16.249 size: 1.000366 MiB name: RG_ring_1_59773 
00:05:16.249 size: 1.000366 MiB name: RG_ring_4_59773 00:05:16.249 size: 1.000366 MiB name: RG_ring_5_59773 00:05:16.249 size: 0.125366 MiB name: RG_ring_2_59773 00:05:16.249 size: 0.015991 MiB name: RG_ring_3_59773 00:05:16.249 end memzones------- 00:05:16.249 08:00:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:16.509 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:05:16.509 list of free elements. size: 12.471375 MiB 00:05:16.509 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:16.509 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:16.509 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:16.509 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:16.509 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:16.509 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:16.509 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:16.509 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:16.509 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:16.509 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:05:16.509 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:16.509 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:16.509 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:16.509 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:16.509 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:16.509 list of standard malloc elements. size: 199.266052 MiB 00:05:16.509 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:16.509 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:16.510 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:16.510 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:16.510 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:16.510 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:16.510 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:16.510 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:16.510 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:16.510 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:16.510 element at address: 
0x2000002d60c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a590c0 with size: 
0.000183 MiB 00:05:16.510 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:16.510 
element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:16.510 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:16.511 element at address: 
0x20001aa931c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:16.511 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6c1c0 with size: 
0.000183 MiB 00:05:16.511 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:16.511 
element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:16.511 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:16.511 list of memzone associated elements. 
size: 602.262573 MiB 00:05:16.511 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:16.511 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:16.512 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:16.512 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:16.512 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:16.512 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59773_0 00:05:16.512 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:16.512 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59773_0 00:05:16.512 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:16.512 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59773_0 00:05:16.512 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:16.512 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:16.512 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:16.512 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:16.512 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:16.512 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59773 00:05:16.512 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:16.512 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59773 00:05:16.512 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:16.512 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59773 00:05:16.512 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:16.512 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:16.512 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:16.512 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:16.512 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:16.512 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:16.512 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:16.512 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:16.512 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:16.512 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59773 00:05:16.512 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:16.512 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59773 00:05:16.512 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:16.512 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59773 00:05:16.512 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:16.512 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59773 00:05:16.512 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:16.512 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59773 00:05:16.512 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:16.512 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:16.512 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:16.512 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:16.512 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:16.512 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:16.512 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:16.512 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59773 00:05:16.512 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:16.512 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:16.512 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:16.512 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:16.512 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:16.512 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59773 00:05:16.512 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:16.512 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:16.512 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:16.512 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59773 00:05:16.512 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:16.512 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59773 00:05:16.512 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:16.512 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:16.512 08:00:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:16.512 08:00:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59773 00:05:16.512 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 59773 ']' 00:05:16.512 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 59773 00:05:16.512 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:05:16.512 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:16.512 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 59773 00:05:16.512 killing process with pid 59773 00:05:16.512 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:16.512 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:16.512 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 59773' 00:05:16.512 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 59773 00:05:16.512 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 59773 00:05:17.080 ************************************ 00:05:17.080 END TEST dpdk_mem_utility 00:05:17.080 ************************************ 00:05:17.080 00:05:17.080 real 0m1.818s 00:05:17.080 user 0m1.827s 00:05:17.080 sys 0m0.504s 00:05:17.080 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.080 08:00:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.080 08:00:38 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:17.080 08:00:38 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:17.080 08:00:38 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:17.080 08:00:38 -- common/autotest_common.sh@10 -- # set +x 00:05:17.080 ************************************ 00:05:17.080 START TEST event 00:05:17.080 ************************************ 00:05:17.080 08:00:38 event -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:17.080 * Looking for test storage... 
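The dpdk_mem_utility stage above drives SPDK's DPDK memory introspection: the env_dpdk_get_mem_stats RPC makes the target dump its statistics to /tmp/spdk_mem_dump.txt (the {"filename": ...} response above), and scripts/dpdk_mem_info.py then post-processes that dump. With no arguments it prints the heap/mempool/memzone summary seen above, while -m 0 expands the per-element detail for heap id 0. A minimal sketch of the same flow against a target on the default /var/tmp/spdk.sock:

    scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                  # heap, mempool and memzone summary
    scripts/dpdk_mem_info.py -m 0             # detailed element list for heap id 0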
00:05:17.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:17.080 08:00:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:17.081 08:00:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:17.081 08:00:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.081 08:00:38 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:17.081 08:00:38 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:17.081 08:00:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.081 ************************************ 00:05:17.081 START TEST event_perf 00:05:17.081 ************************************ 00:05:17.081 08:00:38 event.event_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.081 Running I/O for 1 seconds...[2024-06-10 08:00:38.924935] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:17.081 [2024-06-10 08:00:38.925150] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59850 ] 00:05:17.339 [2024-06-10 08:00:39.058090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.339 [2024-06-10 08:00:39.204901] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.339 [2024-06-10 08:00:39.205005] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.339 [2024-06-10 08:00:39.205074] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.339 [2024-06-10 08:00:39.205076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.713 Running I/O for 1 seconds... 00:05:18.713 lcore 0: 197142 00:05:18.713 lcore 1: 197142 00:05:18.713 lcore 2: 197143 00:05:18.713 lcore 3: 197143 00:05:18.713 done. 00:05:18.713 00:05:18.713 real 0m1.425s 00:05:18.713 user 0m4.213s 00:05:18.713 sys 0m0.083s 00:05:18.713 08:00:40 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:18.713 ************************************ 00:05:18.713 END TEST event_perf 00:05:18.713 08:00:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.713 ************************************ 00:05:18.713 08:00:40 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:18.713 08:00:40 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:18.713 08:00:40 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:18.713 08:00:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.713 ************************************ 00:05:18.713 START TEST event_reactor 00:05:18.713 ************************************ 00:05:18.713 08:00:40 event.event_reactor -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:18.714 [2024-06-10 08:00:40.406172] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
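The event suite started above consists of three one-shot micro-benchmarks, each taking a core mask and a run time in seconds: event_perf (four cores here) reports how many events each lcore processed, reactor prints the one-shot and tick trace seen below, and reactor_perf reports events per second on a single reactor. As traced here and below:

    test/event/event_perf/event_perf -m 0xF -t 1    # per-lcore event counts
    test/event/reactor/reactor -t 1                 # one-shot event plus 100/250/500 ticks on core 0
    test/event/reactor_perf/reactor_perf -t 1       # events per second on one reactor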
00:05:18.714 [2024-06-10 08:00:40.406287] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59889 ] 00:05:18.714 [2024-06-10 08:00:40.543217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.972 [2024-06-10 08:00:40.678903] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.349 test_start 00:05:20.349 oneshot 00:05:20.349 tick 100 00:05:20.349 tick 100 00:05:20.349 tick 250 00:05:20.349 tick 100 00:05:20.349 tick 100 00:05:20.349 tick 100 00:05:20.349 tick 250 00:05:20.349 tick 500 00:05:20.349 tick 100 00:05:20.349 tick 100 00:05:20.349 tick 250 00:05:20.349 tick 100 00:05:20.349 tick 100 00:05:20.349 test_end 00:05:20.349 00:05:20.349 real 0m1.411s 00:05:20.349 user 0m1.231s 00:05:20.349 sys 0m0.074s 00:05:20.349 08:00:41 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:20.349 ************************************ 00:05:20.349 END TEST event_reactor 00:05:20.349 ************************************ 00:05:20.349 08:00:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:20.349 08:00:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.349 08:00:41 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:20.349 08:00:41 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.349 08:00:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.349 ************************************ 00:05:20.349 START TEST event_reactor_perf 00:05:20.349 ************************************ 00:05:20.349 08:00:41 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.349 [2024-06-10 08:00:41.870850] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:05:20.349 [2024-06-10 08:00:41.870946] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59924 ] 00:05:20.349 [2024-06-10 08:00:42.005950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.349 [2024-06-10 08:00:42.135060] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.726 test_start 00:05:21.726 test_end 00:05:21.726 Performance: 378988 events per second 00:05:21.726 ************************************ 00:05:21.726 END TEST event_reactor_perf 00:05:21.726 ************************************ 00:05:21.726 00:05:21.726 real 0m1.401s 00:05:21.726 user 0m1.222s 00:05:21.726 sys 0m0.072s 00:05:21.726 08:00:43 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:21.726 08:00:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.726 08:00:43 event -- event/event.sh@49 -- # uname -s 00:05:21.726 08:00:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:21.726 08:00:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.726 08:00:43 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:21.726 08:00:43 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:21.726 08:00:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.726 ************************************ 00:05:21.726 START TEST event_scheduler 00:05:21.727 ************************************ 00:05:21.727 08:00:43 event.event_scheduler -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.727 * Looking for test storage... 00:05:21.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:21.727 08:00:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:21.727 08:00:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59986 00:05:21.727 08:00:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:21.727 08:00:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.727 08:00:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59986 00:05:21.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.727 08:00:43 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 59986 ']' 00:05:21.727 08:00:43 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.727 08:00:43 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:21.727 08:00:43 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.727 08:00:43 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:21.727 08:00:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.727 [2024-06-10 08:00:43.461027] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
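Note: the scheduler app above is started with --wait-for-rpc, so nothing runs until the test's waitforlisten helper sees the RPC socket answer; only then are framework_set_scheduler and framework_start_init issued over RPC (traced further on). A rough sketch of the polling loop, assuming an rpc_get_methods probe as the readiness check; the real helper in autotest_common.sh is more thorough:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    [ -z "$pid" ] && return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1     # give up if the app died
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                               # RPC server is answering
        fi
        sleep 0.5                                  # retry interval is an assumption
    done
    return 1
}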
00:05:21.727 [2024-06-10 08:00:43.461353] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59986 ] 00:05:21.986 [2024-06-10 08:00:43.603771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.986 [2024-06-10 08:00:43.753063] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.986 [2024-06-10 08:00:43.753187] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.986 [2024-06-10 08:00:43.753333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.986 [2024-06-10 08:00:43.753335] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:05:22.923 08:00:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 POWER: Env isn't set yet! 00:05:22.923 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:22.923 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.923 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.923 POWER: Attempting to initialise PSTAT power management... 00:05:22.923 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.923 POWER: Cannot set governor of lcore 0 to performance 00:05:22.923 POWER: Attempting to initialise AMD PSTATE power management... 00:05:22.923 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.923 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.923 POWER: Attempting to initialise CPPC power management... 00:05:22.923 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.923 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.923 POWER: Attempting to initialise VM power management... 
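Note: the POWER lines above come from the dynamic scheduler's DPDK governor probing ACPI cpufreq, P-state, AMD P-state and CPPC in turn; inside this VM none of the sysfs governor files exist, so every driver fails and the code falls back to the virtio guest-channel attempt that follows. As a hedged illustration (standard cpufreq sysfs paths, not part of the test itself), this is how one could check what a host actually exposes:

# Inspect the cpufreq state the DPDK power library is probing for.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    gov="$cpu/cpufreq/scaling_governor"
    if [ -r "$gov" ]; then
        printf '%s: governor=%s available=%s\n' \
            "${cpu##*/}" "$(cat "$gov")" \
            "$(cat "$cpu/cpufreq/scaling_available_governors" 2>/dev/null)"
    else
        printf '%s: no cpufreq interface (typical for VMs)\n' "${cpu##*/}"
    fi
done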
00:05:22.923 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:22.923 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:22.923 POWER: Unable to set Power Management Environment for lcore 0 00:05:22.923 [2024-06-10 08:00:44.462698] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:22.923 [2024-06-10 08:00:44.462777] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:22.923 [2024-06-10 08:00:44.462869] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:22.923 [2024-06-10 08:00:44.463037] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:22.923 [2024-06-10 08:00:44.463133] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:22.923 [2024-06-10 08:00:44.463199] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 [2024-06-10 08:00:44.543681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.923 [2024-06-10 08:00:44.591354] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 ************************************ 00:05:22.923 START TEST scheduler_create_thread 00:05:22.923 ************************************ 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 2 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 3 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 4 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 5 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 6 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 7 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 8 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 9 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:22.923 08:00:44 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 10 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:22.923 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.924 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.924 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.924 08:00:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.924 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.924 08:00:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.331 08:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.331 08:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:24.331 08:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:24.331 08:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.331 08:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.710 ************************************ 00:05:25.710 END TEST scheduler_create_thread 00:05:25.710 ************************************ 00:05:25.710 08:00:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.710 00:05:25.710 real 0m2.615s 00:05:25.710 user 0m0.025s 00:05:25.710 sys 0m0.002s 00:05:25.710 08:00:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:25.710 08:00:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.710 08:00:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:25.710 08:00:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59986 00:05:25.710 08:00:47 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 59986 ']' 00:05:25.710 08:00:47 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 59986 
00:05:25.710 08:00:47 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 00:05:25.710 08:00:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:25.710 08:00:47 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 59986 00:05:25.710 killing process with pid 59986 00:05:25.710 08:00:47 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:25.710 08:00:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:25.710 08:00:47 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 59986' 00:05:25.710 08:00:47 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 59986 00:05:25.710 08:00:47 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 59986 00:05:25.968 [2024-06-10 08:00:47.700269] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:26.227 00:05:26.227 real 0m4.726s 00:05:26.227 user 0m8.708s 00:05:26.227 sys 0m0.427s 00:05:26.227 08:00:48 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:26.227 08:00:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.227 ************************************ 00:05:26.227 END TEST event_scheduler 00:05:26.227 ************************************ 00:05:26.227 08:00:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:26.227 08:00:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:26.227 08:00:48 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:26.227 08:00:48 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:26.227 08:00:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.486 ************************************ 00:05:26.486 START TEST app_repeat 00:05:26.486 ************************************ 00:05:26.486 08:00:48 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:26.486 Process app_repeat pid: 60085 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60085 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60085' 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.486 spdk_app_start Round 0 00:05:26.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
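Note: what follows is event.sh's app_repeat_test: start the app_repeat binary against /var/tmp/spdk-nbd.sock, then run three rounds of create-malloc / NBD-verify / kill-and-restart. A condensed sketch reconstructed from the trace; backgrounding with & is an assumption, and the second bdev_malloc_create per round plus the error handling are elided:

app_repeat_test() {
    local rpc_server=/var/tmp/spdk-nbd.sock
    local nbd_list=("/dev/nbd0" "/dev/nbd1")
    local bdev_list=("Malloc0" "Malloc1")
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"

    local i
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"

        "$rpc" -s "$rpc_server" bdev_malloc_create 64 4096      # Malloc0 (repeated for Malloc1)
        nbd_rpc_data_verify "$rpc_server" "${bdev_list[*]}" "${nbd_list[*]}"

        # stop the current app instance; the -t 4 repeat count lets the binary start the next round
        "$rpc" -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3
    done
}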
00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:26.486 08:00:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60085 /var/tmp/spdk-nbd.sock 00:05:26.486 08:00:48 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 60085 ']' 00:05:26.486 08:00:48 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.486 08:00:48 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:26.486 08:00:48 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.486 08:00:48 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:26.486 08:00:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.486 [2024-06-10 08:00:48.140820] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:26.486 [2024-06-10 08:00:48.140978] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60085 ] 00:05:26.486 [2024-06-10 08:00:48.287732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.744 [2024-06-10 08:00:48.435150] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.745 [2024-06-10 08:00:48.435162] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.745 [2024-06-10 08:00:48.512412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:27.312 08:00:49 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:27.312 08:00:49 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:27.312 08:00:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.919 Malloc0 00:05:27.919 08:00:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.919 Malloc1 00:05:27.919 08:00:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.919 08:00:49 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.919 08:00:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.179 /dev/nbd0 00:05:28.179 08:00:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.179 08:00:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.179 08:00:50 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:28.179 08:00:50 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:28.179 08:00:50 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:28.179 08:00:50 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:28.179 08:00:50 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.437 1+0 records in 00:05:28.437 1+0 records out 00:05:28.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252171 s, 16.2 MB/s 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:28.437 08:00:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.437 08:00:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.437 08:00:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.437 /dev/nbd1 00:05:28.437 08:00:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.437 08:00:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:28.437 08:00:50 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.696 1+0 records in 00:05:28.696 1+0 records out 00:05:28.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748587 s, 5.5 MB/s 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:28.696 08:00:50 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:28.696 08:00:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.696 08:00:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.696 08:00:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.696 08:00:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.696 08:00:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.955 { 00:05:28.955 "nbd_device": "/dev/nbd0", 00:05:28.955 "bdev_name": "Malloc0" 00:05:28.955 }, 00:05:28.955 { 00:05:28.955 "nbd_device": "/dev/nbd1", 00:05:28.955 "bdev_name": "Malloc1" 00:05:28.955 } 00:05:28.955 ]' 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.955 { 00:05:28.955 "nbd_device": "/dev/nbd0", 00:05:28.955 "bdev_name": "Malloc0" 00:05:28.955 }, 00:05:28.955 { 00:05:28.955 "nbd_device": "/dev/nbd1", 00:05:28.955 "bdev_name": "Malloc1" 00:05:28.955 } 00:05:28.955 ]' 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.955 /dev/nbd1' 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.955 /dev/nbd1' 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.955 256+0 records in 00:05:28.955 256+0 records out 00:05:28.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102771 s, 102 MB/s 00:05:28.955 
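Note: the 1 MiB random file just written feeds nbd_common.sh's write/verify round trip: the file is dd'd onto each NBD device with O_DIRECT (traced next), then each device is read back and byte-compared against it with cmp. A trimmed sketch of that helper, using the sizes and paths from this job:

nbd_dd_data_verify() {
    local nbd_list=($1)          # e.g. "/dev/nbd0 /dev/nbd1"
    local operation=$2           # write | verify
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    local i

    if [ "$operation" = write ]; then
        # 1 MiB of random data, pushed through every NBD device with O_DIRECT
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # read each device back and byte-compare it against the reference file
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}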
08:00:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.955 256+0 records in 00:05:28.955 256+0 records out 00:05:28.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205121 s, 51.1 MB/s 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.955 256+0 records in 00:05:28.955 256+0 records out 00:05:28.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282573 s, 37.1 MB/s 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.955 08:00:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.214 08:00:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.214 08:00:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.214 08:00:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.214 08:00:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.214 08:00:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.214 08:00:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.214 08:00:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.214 08:00:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # 
return 0 00:05:29.214 08:00:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.214 08:00:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.472 08:00:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.731 08:00:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.731 08:00:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.298 08:00:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.555 [2024-06-10 08:00:52.191798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.555 [2024-06-10 08:00:52.288598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.555 [2024-06-10 08:00:52.288609] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.555 [2024-06-10 08:00:52.363716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:30.555 [2024-06-10 08:00:52.364046] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.555 [2024-06-10 08:00:52.364195] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
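Note: before each restart the test tears the NBD exports down over RPC and checks that none survive: nbd_stop_disk per device, a wait for nbdX to drop out of /proc/partitions, then nbd_get_disks piped through jq must come back empty. Roughly, as reconstructed from the trace (the rpc.py path is the one this job uses):

nbd_stop_disks() {
    local rpc_server=$1
    local nbd_list=($2)              # e.g. "/dev/nbd0 /dev/nbd1"
    local i
    for i in "${nbd_list[@]}"; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$i"
        waitfornbd_exit "${i##*/}"   # poll /proc/partitions until nbdX disappears
    done
}

nbd_get_count() {
    local rpc_server=$1
    local disks
    disks=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks \
            | jq -r '.[] | .nbd_device')
    # grep -c still prints 0 when nothing matches, it just exits non-zero, hence the guard
    echo "$disks" | grep -c /dev/nbd || true
}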
00:05:33.096 08:00:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.096 08:00:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:33.096 spdk_app_start Round 1 00:05:33.096 08:00:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60085 /var/tmp/spdk-nbd.sock 00:05:33.096 08:00:54 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 60085 ']' 00:05:33.096 08:00:54 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.096 08:00:54 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:33.096 08:00:54 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.096 08:00:54 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:33.096 08:00:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.354 08:00:55 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:33.354 08:00:55 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:33.354 08:00:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.921 Malloc0 00:05:33.921 08:00:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.921 Malloc1 00:05:33.921 08:00:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.921 08:00:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.180 /dev/nbd0 00:05:34.180 08:00:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.180 08:00:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.180 08:00:56 event.app_repeat -- 
common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:34.180 08:00:56 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:34.180 08:00:56 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:34.180 08:00:56 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:34.180 08:00:56 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:34.180 08:00:56 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:34.180 08:00:56 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:34.180 08:00:56 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:34.180 08:00:56 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.439 1+0 records in 00:05:34.439 1+0 records out 00:05:34.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373292 s, 11.0 MB/s 00:05:34.439 08:00:56 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.439 08:00:56 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:34.439 08:00:56 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.439 08:00:56 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:34.439 08:00:56 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:34.439 08:00:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.439 08:00:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.439 08:00:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.697 /dev/nbd1 00:05:34.697 08:00:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.697 08:00:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.697 1+0 records in 00:05:34.697 1+0 records out 00:05:34.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346715 s, 11.8 MB/s 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 
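Note: the grep/dd/stat sequence traced around here is the waitfornbd readiness probe: poll /proc/partitions until the device node shows up, then prove it actually serves I/O with a single O_DIRECT read of non-zero size. A sketch under those assumptions; the retry delay is not visible in the trace and is guessed:

waitfornbd() {
    local nbd_name=$1            # e.g. nbd0
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1                # retry delay is an assumption; the trace only shows the hit path
    done

    # one O_DIRECT read proves the device serves I/O, not just that the node exists
    local testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
    dd if="/dev/$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s "$testfile")
    rm -f "$testfile"
    [ "$size" != 0 ]             # non-empty read => the NBD device is ready
}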
00:05:34.697 08:00:56 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:34.697 08:00:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.697 08:00:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.697 08:00:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.697 08:00:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.697 08:00:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.956 { 00:05:34.956 "nbd_device": "/dev/nbd0", 00:05:34.956 "bdev_name": "Malloc0" 00:05:34.956 }, 00:05:34.956 { 00:05:34.956 "nbd_device": "/dev/nbd1", 00:05:34.956 "bdev_name": "Malloc1" 00:05:34.956 } 00:05:34.956 ]' 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.956 { 00:05:34.956 "nbd_device": "/dev/nbd0", 00:05:34.956 "bdev_name": "Malloc0" 00:05:34.956 }, 00:05:34.956 { 00:05:34.956 "nbd_device": "/dev/nbd1", 00:05:34.956 "bdev_name": "Malloc1" 00:05:34.956 } 00:05:34.956 ]' 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.956 /dev/nbd1' 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.956 /dev/nbd1' 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.956 256+0 records in 00:05:34.956 256+0 records out 00:05:34.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00909336 s, 115 MB/s 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.956 256+0 records in 00:05:34.956 256+0 records out 00:05:34.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252749 s, 41.5 MB/s 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.956 08:00:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:05:35.215 256+0 records in 00:05:35.215 256+0 records out 00:05:35.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279819 s, 37.5 MB/s 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.215 08:00:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.473 08:00:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.473 08:00:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.473 08:00:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.473 08:00:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.473 08:00:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.473 08:00:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.474 08:00:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.474 08:00:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.474 08:00:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.474 08:00:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.732 08:00:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.732 08:00:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.732 08:00:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.732 08:00:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.732 08:00:57 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.732 08:00:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.732 08:00:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.732 08:00:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.732 08:00:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.733 08:00:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.733 08:00:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.991 08:00:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.991 08:00:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.559 08:00:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.559 [2024-06-10 08:00:58.408939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.817 [2024-06-10 08:00:58.535188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.818 [2024-06-10 08:00:58.535195] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.818 [2024-06-10 08:00:58.612614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:36.818 [2024-06-10 08:00:58.612702] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.818 [2024-06-10 08:00:58.612716] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:39.350 spdk_app_start Round 2 00:05:39.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.350 08:01:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.350 08:01:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:39.350 08:01:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60085 /var/tmp/spdk-nbd.sock 00:05:39.350 08:01:01 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 60085 ']' 00:05:39.350 08:01:01 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.350 08:01:01 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:39.350 08:01:01 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:39.350 08:01:01 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:39.350 08:01:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.609 08:01:01 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:39.609 08:01:01 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:39.609 08:01:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.867 Malloc0 00:05:39.867 08:01:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.126 Malloc1 00:05:40.126 08:01:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.126 08:01:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.126 08:01:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.126 08:01:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.126 08:01:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.126 08:01:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.127 08:01:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.127 08:01:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.127 08:01:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.127 08:01:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.127 08:01:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.127 08:01:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.127 08:01:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.127 08:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.127 08:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.127 08:01:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.385 /dev/nbd0 00:05:40.385 08:01:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.385 08:01:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.385 1+0 records in 00:05:40.385 1+0 records out 
00:05:40.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269218 s, 15.2 MB/s 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:40.385 08:01:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:40.385 08:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.385 08:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.385 08:01:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.643 /dev/nbd1 00:05:40.643 08:01:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.643 08:01:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.643 1+0 records in 00:05:40.643 1+0 records out 00:05:40.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322859 s, 12.7 MB/s 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:40.643 08:01:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.902 08:01:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:40.902 08:01:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:40.902 08:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.902 08:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.903 08:01:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.903 08:01:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.903 08:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.161 { 00:05:41.161 "nbd_device": "/dev/nbd0", 00:05:41.161 "bdev_name": "Malloc0" 00:05:41.161 }, 00:05:41.161 { 00:05:41.161 "nbd_device": "/dev/nbd1", 00:05:41.161 "bdev_name": "Malloc1" 00:05:41.161 } 
00:05:41.161 ]' 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.161 { 00:05:41.161 "nbd_device": "/dev/nbd0", 00:05:41.161 "bdev_name": "Malloc0" 00:05:41.161 }, 00:05:41.161 { 00:05:41.161 "nbd_device": "/dev/nbd1", 00:05:41.161 "bdev_name": "Malloc1" 00:05:41.161 } 00:05:41.161 ]' 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.161 /dev/nbd1' 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.161 /dev/nbd1' 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.161 08:01:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.162 256+0 records in 00:05:41.162 256+0 records out 00:05:41.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108688 s, 96.5 MB/s 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.162 256+0 records in 00:05:41.162 256+0 records out 00:05:41.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211678 s, 49.5 MB/s 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.162 256+0 records in 00:05:41.162 256+0 records out 00:05:41.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248518 s, 42.2 MB/s 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.162 08:01:02 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.162 08:01:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.421 08:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.421 08:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.421 08:01:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.421 08:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.421 08:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.421 08:01:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.421 08:01:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.421 08:01:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.421 08:01:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.421 08:01:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.679 08:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.679 08:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.679 08:01:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.679 08:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.680 08:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.680 08:01:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.680 08:01:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.680 08:01:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.680 08:01:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.680 08:01:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.680 08:01:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.938 08:01:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.938 08:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.938 08:01:03 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:42.196 08:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.196 08:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.196 08:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.196 08:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.196 08:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.196 08:01:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.196 08:01:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.196 08:01:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.196 08:01:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.196 08:01:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.455 08:01:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.713 [2024-06-10 08:01:04.426215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.713 [2024-06-10 08:01:04.562394] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.713 [2024-06-10 08:01:04.562405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.970 [2024-06-10 08:01:04.634680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.970 [2024-06-10 08:01:04.634776] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.970 [2024-06-10 08:01:04.634801] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.498 08:01:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60085 /var/tmp/spdk-nbd.sock 00:05:45.498 08:01:07 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 60085 ']' 00:05:45.498 08:01:07 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.498 08:01:07 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:45.498 08:01:07 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
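Pieced together, the write and verify passes shown above amount to: generate 1 MiB of random data once, copy it onto every exported NBD device with O_DIRECT, then byte-compare each device against the reference file and remove it. A sketch under those assumptions, with the file location and block sizes copied from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # Write pass: one shared 1 MiB reference file, copied to each device.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify pass: the first 1 MiB of each device must match the reference byte for byte.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"   # non-zero exit (and a diff report) on mismatch
    done
    rm "$tmp"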
00:05:45.498 08:01:07 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:45.498 08:01:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.756 08:01:07 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:45.757 08:01:07 event.app_repeat -- event/event.sh@39 -- # killprocess 60085 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 60085 ']' 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 60085 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60085 00:05:45.757 killing process with pid 60085 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60085' 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@968 -- # kill 60085 00:05:45.757 08:01:07 event.app_repeat -- common/autotest_common.sh@973 -- # wait 60085 00:05:46.015 spdk_app_start is called in Round 0. 00:05:46.015 Shutdown signal received, stop current app iteration 00:05:46.015 Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 reinitialization... 00:05:46.015 spdk_app_start is called in Round 1. 00:05:46.015 Shutdown signal received, stop current app iteration 00:05:46.015 Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 reinitialization... 00:05:46.015 spdk_app_start is called in Round 2. 00:05:46.015 Shutdown signal received, stop current app iteration 00:05:46.015 Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 reinitialization... 00:05:46.015 spdk_app_start is called in Round 3. 00:05:46.015 Shutdown signal received, stop current app iteration 00:05:46.015 ************************************ 00:05:46.015 END TEST app_repeat 00:05:46.015 ************************************ 00:05:46.015 08:01:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.015 08:01:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:46.015 00:05:46.015 real 0m19.633s 00:05:46.015 user 0m43.777s 00:05:46.015 sys 0m3.212s 00:05:46.015 08:01:07 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:46.015 08:01:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.015 08:01:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.015 08:01:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:46.015 08:01:07 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:46.015 08:01:07 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:46.015 08:01:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.015 ************************************ 00:05:46.015 START TEST cpu_locks 00:05:46.015 ************************************ 00:05:46.015 08:01:07 event.cpu_locks -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:46.015 * Looking for test storage... 
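The killprocess calls that recur throughout this log follow one pattern: confirm the PID is alive and belongs to an SPDK reactor rather than something privileged, then kill it and reap it. A condensed, illustrative rendering of the checks visible in the trace (the real helper in autotest_common.sh has more branches):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                   # still running?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1      # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap it; ignore its exit code
    }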
00:05:46.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:46.015 08:01:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.015 08:01:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.015 08:01:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.015 08:01:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.015 08:01:07 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:46.015 08:01:07 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:46.015 08:01:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.015 ************************************ 00:05:46.015 START TEST default_locks 00:05:46.015 ************************************ 00:05:46.015 08:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:05:46.015 08:01:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60528 00:05:46.015 08:01:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60528 00:05:46.015 08:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 60528 ']' 00:05:46.015 08:01:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.015 08:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.015 08:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:46.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.015 08:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.015 08:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:46.015 08:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.289 [2024-06-10 08:01:07.937323] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
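default_locks starts here: a single spdk_tgt pinned to core 0 (-m 0x1), with waitforlisten blocking until the target's RPC socket answers. The waitforlisten body is not expanded in this trace; the visible pieces (the PID argument, /var/tmp/spdk.sock, max_retries=100) suggest a polling loop roughly like the hypothetical sketch below, where rpc_get_methods is assumed as the probe call:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1                          # target died while starting
            # rpc_get_methods answers as soon as the RPC server is listening.
            "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock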
00:05:46.289 [2024-06-10 08:01:07.937436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60528 ] 00:05:46.289 [2024-06-10 08:01:08.075209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.558 [2024-06-10 08:01:08.218252] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.558 [2024-06-10 08:01:08.291162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:47.124 08:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:47.124 08:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:05:47.124 08:01:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60528 00:05:47.124 08:01:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60528 00:05:47.124 08:01:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60528 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 60528 ']' 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 60528 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60528 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:47.690 killing process with pid 60528 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60528' 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 60528 00:05:47.690 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 60528 00:05:47.948 08:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60528 00:05:47.948 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:47.948 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 60528 00:05:47.948 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:47.948 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:47.948 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:47.948 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:47.948 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 60528 00:05:47.948 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 60528 ']' 00:05:47.948 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.206 08:01:09 
event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:48.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.206 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 845: kill: (60528) - No such process 00:05:48.206 ERROR: process (pid: 60528) is no longer running 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.206 00:05:48.206 real 0m1.942s 00:05:48.206 user 0m2.000s 00:05:48.206 sys 0m0.596s 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:48.206 08:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.206 ************************************ 00:05:48.206 END TEST default_locks 00:05:48.206 ************************************ 00:05:48.206 08:01:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.206 08:01:09 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:48.206 08:01:09 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:48.206 08:01:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.207 ************************************ 00:05:48.207 START TEST default_locks_via_rpc 00:05:48.207 ************************************ 00:05:48.207 08:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:05:48.207 08:01:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60570 00:05:48.207 08:01:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60570 00:05:48.207 08:01:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.207 08:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 60570 ']' 00:05:48.207 08:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.207 08:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:05:48.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.207 08:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.207 08:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:48.207 08:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.207 [2024-06-10 08:01:09.927545] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:48.207 [2024-06-10 08:01:09.927678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60570 ] 00:05:48.207 [2024-06-10 08:01:10.066284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.464 [2024-06-10 08:01:10.208013] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.464 [2024-06-10 08:01:10.281855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:49.030 08:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:49.031 08:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:49.031 08:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:49.031 08:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.031 08:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60570 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60570 00:05:49.289 08:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60570 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 60570 ']' 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 60570 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@954 -- # uname 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60570 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60570' 00:05:49.548 killing process with pid 60570 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 60570 00:05:49.548 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 60570 00:05:50.116 00:05:50.116 real 0m2.032s 00:05:50.116 user 0m2.116s 00:05:50.116 sys 0m0.631s 00:05:50.116 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.116 08:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.116 ************************************ 00:05:50.116 END TEST default_locks_via_rpc 00:05:50.116 ************************************ 00:05:50.116 08:01:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:50.116 08:01:11 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:50.116 08:01:11 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.116 08:01:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.116 ************************************ 00:05:50.116 START TEST non_locking_app_on_locked_coremask 00:05:50.116 ************************************ 00:05:50.116 08:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:05:50.116 08:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60627 00:05:50.116 08:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60627 /var/tmp/spdk.sock 00:05:50.116 08:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.116 08:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 60627 ']' 00:05:50.116 08:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.116 08:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:50.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.116 08:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
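Two helpers carry the default_locks_via_rpc case traced above: locks_exist, which asks lslocks whether the target process holds a file lock whose path contains spdk_cpu_lock, and the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs, which drop and re-acquire those locks at runtime. A sketch of how the check and the toggle fit together (rpc.py stands in for the rpc_cmd wrapper seen in the log; the target is assumed started as in the waitforlisten sketch above):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    locks_exist() {
        # The per-core lock shows up in lslocks with "spdk_cpu_lock" in its path.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    pid=$spdk_tgt_pid                               # started with -m 0x1, default locking

    locks_exist "$pid"                              # locks are held right after startup
    "$rpc_py" framework_disable_cpumask_locks       # release them over RPC...
    locks_exist "$pid" && echo "unexpected: lock still held" >&2
    "$rpc_py" framework_enable_cpumask_locks        # ...and take them again
    locks_exist "$pid"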
00:05:50.116 08:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:50.116 08:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.425 [2024-06-10 08:01:12.011139] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:50.425 [2024-06-10 08:01:12.011256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60627 ] 00:05:50.425 [2024-06-10 08:01:12.149007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.682 [2024-06-10 08:01:12.291917] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.682 [2024-06-10 08:01:12.363494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.247 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:51.247 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:51.247 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60643 00:05:51.247 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:51.248 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60643 /var/tmp/spdk2.sock 00:05:51.248 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 60643 ']' 00:05:51.248 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.248 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:51.248 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.248 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:51.248 08:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.248 [2024-06-10 08:01:13.018611] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:51.248 [2024-06-10 08:01:13.018721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60643 ] 00:05:51.506 [2024-06-10 08:01:13.161155] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
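non_locking_app_on_locked_coremask then shows the positive case for sharing a core: the first target holds the core 0 lock, and a second target is still allowed onto the same mask because it is launched with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice it prints) and given its own RPC socket. Reduced to the two launch commands, and reusing the waitforlisten sketch above:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance: default behaviour, takes the spdk_cpu_lock for core 0.
    "$bin" -m 0x1 &
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    # Second instance: same core mask, but it skips lock acquisition entirely
    # and listens on its own RPC socket so the two targets don't collide.
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock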
00:05:51.506 [2024-06-10 08:01:13.161213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.764 [2024-06-10 08:01:13.440904] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.764 [2024-06-10 08:01:13.545798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.331 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:52.331 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:52.331 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60627 00:05:52.331 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.331 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60627 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60627 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 60627 ']' 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 60627 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60627 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:53.266 killing process with pid 60627 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60627' 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 60627 00:05:53.266 08:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 60627 00:05:54.201 08:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60643 00:05:54.201 08:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 60643 ']' 00:05:54.201 08:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 60643 00:05:54.201 08:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:54.201 08:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:54.201 08:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60643 00:05:54.201 08:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:54.201 killing process with pid 60643 00:05:54.201 08:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:54.201 08:01:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60643' 00:05:54.201 08:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 60643 00:05:54.201 08:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 60643 00:05:54.459 00:05:54.459 real 0m4.331s 00:05:54.459 user 0m4.692s 00:05:54.459 sys 0m1.146s 00:05:54.459 08:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:54.459 08:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.459 ************************************ 00:05:54.459 END TEST non_locking_app_on_locked_coremask 00:05:54.459 ************************************ 00:05:54.459 08:01:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:54.459 08:01:16 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:54.459 08:01:16 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:54.459 08:01:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.459 ************************************ 00:05:54.459 START TEST locking_app_on_unlocked_coremask 00:05:54.459 ************************************ 00:05:54.459 08:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:05:54.459 08:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60715 00:05:54.459 08:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60715 /var/tmp/spdk.sock 00:05:54.459 08:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:54.459 08:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 60715 ']' 00:05:54.459 08:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.459 08:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:54.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.459 08:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.459 08:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:54.459 08:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.738 [2024-06-10 08:01:16.386406] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:05:54.738 [2024-06-10 08:01:16.386519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60715 ] 00:05:54.738 [2024-06-10 08:01:16.527459] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
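Each sub-test in this log is driven through the same run_test wrapper, which is what produces the START TEST / END TEST banners and the real/user/sys timing lines (0m4.331s just above, for example). Its visible behaviour can be approximated as below; the exact banner formatting and timing mechanism inside autotest_common.sh may differ:

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # run the test; bash prints real/user/sys on completion
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # Usage, matching the calls seen in the trace:
    run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh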
00:05:54.738 [2024-06-10 08:01:16.527525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.996 [2024-06-10 08:01:16.679192] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.996 [2024-06-10 08:01:16.734780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60731 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60731 /var/tmp/spdk2.sock 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 60731 ']' 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:55.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:55.560 08:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.560 [2024-06-10 08:01:17.425682] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
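locking_app_on_unlocked_coremask inverts the previous case: the first target runs lock-free (--disable-cpumask-locks), so the second target, started with default locking on the same mask, is the one that actually acquires the core 0 lock; that is why the later lslocks check is run against the second PID (60731). In sketch form, again reusing the helpers sketched earlier:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$bin" -m 0x1 --disable-cpumask-locks &        # instance 1: never takes the lock
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    "$bin" -m 0x1 -r /var/tmp/spdk2.sock &         # instance 2: free to claim core 0
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock

    locks_exist "$pid2"                            # the lock belongs to instance 2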
00:05:55.560 [2024-06-10 08:01:17.425832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60731 ] 00:05:55.817 [2024-06-10 08:01:17.577589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.075 [2024-06-10 08:01:17.875886] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.333 [2024-06-10 08:01:18.021225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.899 08:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:56.899 08:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:56.899 08:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60731 00:05:56.899 08:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60731 00:05:56.899 08:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60715 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 60715 ']' 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 60715 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60715 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:57.465 killing process with pid 60715 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60715' 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 60715 00:05:57.465 08:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 60715 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60731 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 60731 ']' 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 60731 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60731 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # 
process_name=reactor_0 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:58.396 killing process with pid 60731 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60731' 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 60731 00:05:58.396 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 60731 00:05:58.966 00:05:58.966 real 0m4.406s 00:05:58.966 user 0m4.784s 00:05:58.966 sys 0m1.189s 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.966 ************************************ 00:05:58.966 END TEST locking_app_on_unlocked_coremask 00:05:58.966 ************************************ 00:05:58.966 08:01:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:58.966 08:01:20 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:58.966 08:01:20 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:58.966 08:01:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.966 ************************************ 00:05:58.966 START TEST locking_app_on_locked_coremask 00:05:58.966 ************************************ 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60798 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60798 /var/tmp/spdk.sock 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 60798 ']' 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:58.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:58.966 08:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.223 [2024-06-10 08:01:20.838248] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
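The no_locks helper used at the end of default_locks (and again after the RPC variant) asserts that no stale lock files survive once the target is gone. The trace only shows its bookkeeping (lock_files=() and the (( 0 != 0 )) count check), so the glob in this sketch is an assumption rather than something visible in the log; only the spdk_cpu_lock name itself appears, via lslocks:

    no_locks() {
        local lock_files=()
        # Assumed location/prefix of the per-core lock files; the actual glob is
        # not shown in this trace.
        shopt -s nullglob
        lock_files=(/var/tmp/spdk_cpu_lock*)
        shopt -u nullglob
        (( ${#lock_files[@]} != 0 )) && return 1   # fail if anything is left over
        return 0
    }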
00:05:59.224 [2024-06-10 08:01:20.838349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60798 ] 00:05:59.224 [2024-06-10 08:01:20.971490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.480 [2024-06-10 08:01:21.114577] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.480 [2024-06-10 08:01:21.187315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60814 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60814 /var/tmp/spdk2.sock 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 60814 /var/tmp/spdk2.sock 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 60814 /var/tmp/spdk2.sock 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 60814 ']' 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:00.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:00.044 08:01:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.044 [2024-06-10 08:01:21.880144] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
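This is the negative case: a second spdk_tgt on the same mask, without --disable-cpumask-locks, is expected to hit "Cannot create lock on core 0" and exit, so its waitforlisten is wrapped in the NOT helper, which succeeds only when the wrapped command fails (the error lines appear just below). A compact, hedged rendering of the wrapper and the call, omitting the es>128 special-casing the real helper performs:

    NOT() {
        # Invert the wrapped command's status: the test passes only if it fails.
        if "$@"; then
            return 1
        fi
        return 0
    }

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$bin" -m 0x1 -r /var/tmp/spdk2.sock &         # same mask, locks enabled -> must fail
    pid2=$!
    # The target aborts with "Unable to acquire lock on assigned core mask - exiting",
    # so waiting for its RPC socket has to fail, and NOT turns that failure into success.
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock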
00:06:00.044 [2024-06-10 08:01:21.880301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60814 ] 00:06:00.302 [2024-06-10 08:01:22.023264] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60798 has claimed it. 00:06:00.302 [2024-06-10 08:01:22.023338] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.864 ERROR: process (pid: 60814) is no longer running 00:06:00.864 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 845: kill: (60814) - No such process 00:06:00.864 08:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:00.864 08:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:00.864 08:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:00.864 08:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:00.864 08:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:00.864 08:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:00.864 08:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60798 00:06:00.864 08:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60798 00:06:00.864 08:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.428 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60798 00:06:01.428 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 60798 ']' 00:06:01.429 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 60798 00:06:01.429 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:01.429 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:01.429 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60798 00:06:01.429 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:01.429 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:01.429 killing process with pid 60798 00:06:01.429 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60798' 00:06:01.429 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 60798 00:06:01.429 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 60798 00:06:01.995 00:06:01.995 real 0m2.779s 00:06:01.995 user 0m3.124s 00:06:01.995 sys 0m0.709s 00:06:01.995 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:01.995 08:01:23 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.995 ************************************ 00:06:01.995 END TEST locking_app_on_locked_coremask 00:06:01.995 ************************************ 00:06:01.995 08:01:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:01.995 08:01:23 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:01.995 08:01:23 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:01.995 08:01:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.995 ************************************ 00:06:01.995 START TEST locking_overlapped_coremask 00:06:01.995 ************************************ 00:06:01.995 08:01:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:06:01.995 08:01:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60865 00:06:01.995 08:01:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60865 /var/tmp/spdk.sock 00:06:01.995 08:01:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 60865 ']' 00:06:01.995 08:01:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:01.995 08:01:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.995 08:01:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:01.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.995 08:01:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.995 08:01:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:01.995 08:01:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.995 [2024-06-10 08:01:23.668097] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:06:01.995 [2024-06-10 08:01:23.668223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60865 ] 00:06:01.995 [2024-06-10 08:01:23.806739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.254 [2024-06-10 08:01:23.955652] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.254 [2024-06-10 08:01:23.955812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.254 [2024-06-10 08:01:23.955803] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.254 [2024-06-10 08:01:24.027887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60883 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60883 /var/tmp/spdk2.sock 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 60883 /var/tmp/spdk2.sock 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 60883 /var/tmp/spdk2.sock 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 60883 ']' 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:02.819 08:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.077 [2024-06-10 08:01:24.749831] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:06:03.077 [2024-06-10 08:01:24.750006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60883 ] 00:06:03.077 [2024-06-10 08:01:24.898512] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60865 has claimed it. 00:06:03.077 [2024-06-10 08:01:24.898589] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.642 ERROR: process (pid: 60883) is no longer running 00:06:03.642 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 845: kill: (60883) - No such process 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60865 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 60865 ']' 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 60865 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60865 00:06:03.642 killing process with pid 60865 00:06:03.642 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:03.643 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:03.643 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60865' 00:06:03.643 08:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 60865 00:06:03.643 08:01:25 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@973 -- # wait 60865 00:06:04.207 ************************************ 00:06:04.208 END TEST locking_overlapped_coremask 00:06:04.208 ************************************ 00:06:04.208 00:06:04.208 real 0m2.425s 00:06:04.208 user 0m6.644s 00:06:04.208 sys 0m0.490s 00:06:04.208 08:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.208 08:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.208 08:01:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:04.208 08:01:26 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:04.208 08:01:26 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.208 08:01:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.466 ************************************ 00:06:04.466 START TEST locking_overlapped_coremask_via_rpc 00:06:04.466 ************************************ 00:06:04.466 08:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:06:04.466 08:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60923 00:06:04.466 08:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60923 /var/tmp/spdk.sock 00:06:04.466 08:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:04.466 08:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 60923 ']' 00:06:04.466 08:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.466 08:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:04.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.466 08:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.466 08:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:04.466 08:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.466 [2024-06-10 08:01:26.141918] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:04.466 [2024-06-10 08:01:26.142031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60923 ] 00:06:04.466 [2024-06-10 08:01:26.279745] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:04.466 [2024-06-10 08:01:26.279816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.724 [2024-06-10 08:01:26.426309] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.724 [2024-06-10 08:01:26.426406] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.724 [2024-06-10 08:01:26.426412] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.724 [2024-06-10 08:01:26.497423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60941 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60941 /var/tmp/spdk2.sock 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 60941 ']' 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:05.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:05.289 08:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.547 [2024-06-10 08:01:27.208692] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:05.547 [2024-06-10 08:01:27.208851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60941 ] 00:06:05.547 [2024-06-10 08:01:27.361455] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:05.547 [2024-06-10 08:01:27.361512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.831 [2024-06-10 08:01:27.641431] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.831 [2024-06-10 08:01:27.641561] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.831 [2024-06-10 08:01:27.641562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:06:06.120 [2024-06-10 08:01:27.745185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.378 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:06.378 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:06.378 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.378 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:06.378 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.636 [2024-06-10 08:01:28.266918] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60923 has claimed it. 00:06:06.636 request: 00:06:06.636 { 00:06:06.636 "method": "framework_enable_cpumask_locks", 00:06:06.636 "req_id": 1 00:06:06.636 } 00:06:06.636 Got JSON-RPC error response 00:06:06.636 response: 00:06:06.636 { 00:06:06.636 "code": -32603, 00:06:06.636 "message": "Failed to claim CPU core: 2" 00:06:06.636 } 00:06:06.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
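Note on the JSON-RPC exchange above: it is the raw form of the framework_enable_cpumask_locks call the test issues through rpc_cmd against the second target's socket (/var/tmp/spdk2.sock). That target was started with --disable-cpumask-locks on mask 0x1c, and the request fails with -32603 because the first target (pid 60923, mask 0x7) re-enabled locking over RPC and already holds the lock on core 2, where the two masks overlap. Outside the test harness an equivalent manual call could look like the sketch below; the scripts/rpc.py client path is assumed from the repository layout visible elsewhere in this log and is not a command captured in this run.
# hypothetical manual call against the second target's RPC socket (illustrative only)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks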
00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:06.636 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:06.637 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60923 /var/tmp/spdk.sock 00:06:06.637 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 60923 ']' 00:06:06.637 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.637 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:06.637 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.637 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:06.637 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.894 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:06.894 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:06.895 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60941 /var/tmp/spdk2.sock 00:06:06.895 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 60941 ']' 00:06:06.895 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.895 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:06.895 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:06.895 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:06.895 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.153 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:07.153 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:07.153 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:07.153 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.153 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.153 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.153 00:06:07.153 real 0m2.787s 00:06:07.153 user 0m1.496s 00:06:07.153 sys 0m0.204s 00:06:07.153 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:07.153 08:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.153 ************************************ 00:06:07.153 END TEST locking_overlapped_coremask_via_rpc 00:06:07.153 ************************************ 00:06:07.153 08:01:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:07.153 08:01:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60923 ]] 00:06:07.153 08:01:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60923 00:06:07.153 08:01:28 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 60923 ']' 00:06:07.153 08:01:28 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 60923 00:06:07.153 08:01:28 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:07.153 08:01:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:07.153 08:01:28 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60923 00:06:07.153 killing process with pid 60923 00:06:07.153 08:01:28 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:07.153 08:01:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:07.153 08:01:28 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60923' 00:06:07.153 08:01:28 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 60923 00:06:07.153 08:01:28 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 60923 00:06:07.719 08:01:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60941 ]] 00:06:07.719 08:01:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60941 00:06:07.719 08:01:29 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 60941 ']' 00:06:07.719 08:01:29 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 60941 00:06:07.719 08:01:29 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:07.719 08:01:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:07.719 
08:01:29 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60941 00:06:07.719 killing process with pid 60941 00:06:07.719 08:01:29 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:07.719 08:01:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:07.719 08:01:29 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60941' 00:06:07.719 08:01:29 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 60941 00:06:07.719 08:01:29 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 60941 00:06:08.284 08:01:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.284 08:01:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:08.284 08:01:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60923 ]] 00:06:08.284 08:01:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60923 00:06:08.284 08:01:29 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 60923 ']' 00:06:08.284 Process with pid 60923 is not found 00:06:08.284 Process with pid 60941 is not found 00:06:08.284 08:01:29 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 60923 00:06:08.285 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (60923) - No such process 00:06:08.285 08:01:29 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 60923 is not found' 00:06:08.285 08:01:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60941 ]] 00:06:08.285 08:01:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60941 00:06:08.285 08:01:29 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 60941 ']' 00:06:08.285 08:01:29 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 60941 00:06:08.285 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (60941) - No such process 00:06:08.285 08:01:29 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 60941 is not found' 00:06:08.285 08:01:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.285 ************************************ 00:06:08.285 END TEST cpu_locks 00:06:08.285 ************************************ 00:06:08.285 00:06:08.285 real 0m22.123s 00:06:08.285 user 0m38.320s 00:06:08.285 sys 0m5.869s 00:06:08.285 08:01:29 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:08.285 08:01:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.285 ************************************ 00:06:08.285 END TEST event 00:06:08.285 ************************************ 00:06:08.285 00:06:08.285 real 0m51.138s 00:06:08.285 user 1m37.590s 00:06:08.285 sys 0m9.998s 00:06:08.285 08:01:29 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:08.285 08:01:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.285 08:01:29 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:08.285 08:01:29 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:08.285 08:01:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:08.285 08:01:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.285 ************************************ 00:06:08.285 START TEST thread 00:06:08.285 ************************************ 00:06:08.285 08:01:29 thread -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:08.285 * Looking for test storage... 
00:06:08.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:08.285 08:01:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.285 08:01:30 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:08.285 08:01:30 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:08.285 08:01:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.285 ************************************ 00:06:08.285 START TEST thread_poller_perf 00:06:08.285 ************************************ 00:06:08.285 08:01:30 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.285 [2024-06-10 08:01:30.097739] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:08.285 [2024-06-10 08:01:30.097896] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61069 ] 00:06:08.543 [2024-06-10 08:01:30.241028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.543 [2024-06-10 08:01:30.380808] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.543 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:09.917 ====================================== 00:06:09.917 busy:2206987043 (cyc) 00:06:09.917 total_run_count: 317000 00:06:09.917 tsc_hz: 2200000000 (cyc) 00:06:09.917 ====================================== 00:06:09.917 poller_cost: 6962 (cyc), 3164 (nsec) 00:06:09.917 00:06:09.917 real 0m1.419s 00:06:09.917 user 0m1.232s 00:06:09.917 sys 0m0.080s 00:06:09.917 08:01:31 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.917 08:01:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.917 ************************************ 00:06:09.917 END TEST thread_poller_perf 00:06:09.917 ************************************ 00:06:09.917 08:01:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.917 08:01:31 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:09.917 08:01:31 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.917 08:01:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.917 ************************************ 00:06:09.917 START TEST thread_poller_perf 00:06:09.917 ************************************ 00:06:09.917 08:01:31 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.917 [2024-06-10 08:01:31.571898] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:09.917 [2024-06-10 08:01:31.572260] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61105 ] 00:06:09.917 [2024-06-10 08:01:31.710368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.175 Running 1000 pollers for 1 seconds with 0 microseconds period. 
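Reading aid for the poller_perf summary block above (derived only from the printed figures; the tool's internal definition is inferred, not quoted from its source): poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds using tsc_hz. For this 1-microsecond-period run, 2206987043 cyc / 317000 polls ≈ 6962 cyc per poll, and 6962 cyc / 2.2 GHz ≈ 3164 ns, matching the reported "poller_cost: 6962 (cyc), 3164 (nsec)". The 0-microsecond-period run that follows prints the same fields and obeys the same relation (2202204362 / 4228000 ≈ 520 cyc ≈ 236 ns).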
00:06:10.175 [2024-06-10 08:01:31.846646] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.108 ====================================== 00:06:11.108 busy:2202204362 (cyc) 00:06:11.108 total_run_count: 4228000 00:06:11.108 tsc_hz: 2200000000 (cyc) 00:06:11.108 ====================================== 00:06:11.108 poller_cost: 520 (cyc), 236 (nsec) 00:06:11.108 ************************************ 00:06:11.108 END TEST thread_poller_perf 00:06:11.108 ************************************ 00:06:11.108 00:06:11.108 real 0m1.408s 00:06:11.108 user 0m1.233s 00:06:11.108 sys 0m0.066s 00:06:11.108 08:01:32 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.108 08:01:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.367 08:01:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:11.367 00:06:11.367 real 0m3.011s 00:06:11.367 user 0m2.532s 00:06:11.367 sys 0m0.261s 00:06:11.367 08:01:33 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.367 08:01:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.367 ************************************ 00:06:11.367 END TEST thread 00:06:11.367 ************************************ 00:06:11.367 08:01:33 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:11.367 08:01:33 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:11.367 08:01:33 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:11.367 08:01:33 -- common/autotest_common.sh@10 -- # set +x 00:06:11.367 ************************************ 00:06:11.367 START TEST accel 00:06:11.367 ************************************ 00:06:11.367 08:01:33 accel -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:11.367 * Looking for test storage... 00:06:11.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:11.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.367 08:01:33 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:11.367 08:01:33 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:11.367 08:01:33 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.367 08:01:33 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61179 00:06:11.367 08:01:33 accel -- accel/accel.sh@63 -- # waitforlisten 61179 00:06:11.367 08:01:33 accel -- common/autotest_common.sh@830 -- # '[' -z 61179 ']' 00:06:11.367 08:01:33 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.367 08:01:33 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:11.367 08:01:33 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:11.367 08:01:33 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:11.367 08:01:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.367 08:01:33 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:11.367 08:01:33 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:11.367 08:01:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.367 08:01:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.367 08:01:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.367 08:01:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.367 08:01:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.367 08:01:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:11.367 08:01:33 accel -- accel/accel.sh@41 -- # jq -r . 00:06:11.367 [2024-06-10 08:01:33.191696] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:11.367 [2024-06-10 08:01:33.192114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61179 ] 00:06:11.625 [2024-06-10 08:01:33.332977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.625 [2024-06-10 08:01:33.486181] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.884 [2024-06-10 08:01:33.544957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.451 08:01:34 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:12.451 08:01:34 accel -- common/autotest_common.sh@863 -- # return 0 00:06:12.451 08:01:34 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:12.451 08:01:34 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:12.451 08:01:34 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:12.451 08:01:34 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:12.451 08:01:34 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:12.451 08:01:34 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:12.451 08:01:34 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:12.451 08:01:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.451 08:01:34 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:12.451 08:01:34 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:12.451 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.451 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.451 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.451 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.451 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.451 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.451 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.451 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.451 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.451 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.451 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.451 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.451 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.451 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.451 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.451 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.452 08:01:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.452 08:01:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.452 08:01:34 accel -- accel/accel.sh@75 -- # killprocess 61179 00:06:12.452 08:01:34 accel -- common/autotest_common.sh@949 -- # '[' -z 61179 ']' 00:06:12.452 08:01:34 accel -- common/autotest_common.sh@953 -- # kill -0 61179 00:06:12.452 08:01:34 accel -- common/autotest_common.sh@954 -- # uname 00:06:12.452 08:01:34 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:12.452 08:01:34 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 61179 00:06:12.452 08:01:34 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:12.452 08:01:34 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:12.452 08:01:34 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 61179' 00:06:12.452 killing process with pid 61179 00:06:12.452 08:01:34 accel -- common/autotest_common.sh@968 -- # kill 61179 00:06:12.452 08:01:34 accel -- common/autotest_common.sh@973 -- # wait 61179 00:06:13.018 08:01:34 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:13.018 08:01:34 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:13.018 08:01:34 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:13.018 08:01:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.018 08:01:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.018 08:01:34 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:13.018 08:01:34 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:13.018 08:01:34 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:13.018 08:01:34 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.018 08:01:34 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.018 08:01:34 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.018 08:01:34 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.018 08:01:34 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.018 08:01:34 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:13.018 08:01:34 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:13.018 08:01:34 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.018 08:01:34 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:13.018 08:01:34 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:13.018 08:01:34 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:13.018 08:01:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.018 08:01:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.019 ************************************ 00:06:13.019 START TEST accel_missing_filename 00:06:13.019 ************************************ 00:06:13.019 08:01:34 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:13.019 08:01:34 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:13.019 08:01:34 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:13.019 08:01:34 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:13.019 08:01:34 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:13.019 08:01:34 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:13.019 08:01:34 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:13.019 08:01:34 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:13.019 08:01:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:13.019 08:01:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:13.019 08:01:34 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.019 08:01:34 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.019 08:01:34 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.019 08:01:34 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.019 08:01:34 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.019 08:01:34 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:13.019 08:01:34 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:13.019 [2024-06-10 08:01:34.703047] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:13.019 [2024-06-10 08:01:34.703155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61231 ] 00:06:13.019 [2024-06-10 08:01:34.838934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.276 [2024-06-10 08:01:34.976449] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.276 [2024-06-10 08:01:35.030856] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.276 [2024-06-10 08:01:35.105903] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:13.534 A filename is required. 
00:06:13.534 08:01:35 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:13.534 08:01:35 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:13.534 08:01:35 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:13.534 08:01:35 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:13.534 08:01:35 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:13.534 08:01:35 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:13.534 00:06:13.534 real 0m0.540s 00:06:13.534 user 0m0.362s 00:06:13.534 sys 0m0.119s 00:06:13.534 08:01:35 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.534 08:01:35 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:13.534 ************************************ 00:06:13.534 END TEST accel_missing_filename 00:06:13.534 ************************************ 00:06:13.534 08:01:35 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.534 08:01:35 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:13.534 08:01:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.534 08:01:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.534 ************************************ 00:06:13.534 START TEST accel_compress_verify 00:06:13.534 ************************************ 00:06:13.534 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.534 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:13.534 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.534 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:13.534 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:13.534 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:13.534 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:13.534 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.534 08:01:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.534 08:01:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:13.534 08:01:35 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.534 08:01:35 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.534 08:01:35 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.534 08:01:35 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.534 08:01:35 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.534 08:01:35 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:13.534 08:01:35 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:06:13.534 [2024-06-10 08:01:35.291732] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:13.534 [2024-06-10 08:01:35.291851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61255 ] 00:06:13.792 [2024-06-10 08:01:35.426004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.792 [2024-06-10 08:01:35.566031] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.792 [2024-06-10 08:01:35.620337] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.050 [2024-06-10 08:01:35.696118] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:14.050 00:06:14.050 Compression does not support the verify option, aborting. 00:06:14.050 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:14.050 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:14.050 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:14.050 ************************************ 00:06:14.050 END TEST accel_compress_verify 00:06:14.050 ************************************ 00:06:14.050 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:14.050 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:14.050 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:14.050 00:06:14.050 real 0m0.545s 00:06:14.050 user 0m0.365s 00:06:14.050 sys 0m0.123s 00:06:14.050 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.050 08:01:35 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:14.050 08:01:35 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:14.050 08:01:35 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:14.050 08:01:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.050 08:01:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.050 ************************************ 00:06:14.050 START TEST accel_wrong_workload 00:06:14.050 ************************************ 00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:06:14.050 08:01:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:14.050 08:01:35 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:14.050 08:01:35 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.050 08:01:35 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.050 08:01:35 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.050 08:01:35 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.050 08:01:35 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.050 08:01:35 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:14.050 08:01:35 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:14.050 Unsupported workload type: foobar 00:06:14.050 [2024-06-10 08:01:35.888412] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:14.050 accel_perf options: 00:06:14.050 [-h help message] 00:06:14.050 [-q queue depth per core] 00:06:14.050 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:14.050 [-T number of threads per core 00:06:14.050 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:14.050 [-t time in seconds] 00:06:14.050 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:14.050 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:14.050 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:14.050 [-l for compress/decompress workloads, name of uncompressed input file 00:06:14.050 [-S for crc32c workload, use this seed value (default 0) 00:06:14.050 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:14.050 [-f for fill workload, use this BYTE value (default 255) 00:06:14.050 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:14.050 [-y verify result if this switch is on] 00:06:14.050 [-a tasks to allocate per core (default: same value as -q)] 00:06:14.050 Can be used to spread operations across a wider range of memory. 
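The usage text above is accel_perf rejecting the deliberately bogus "-w foobar" workload; the NOT helper seen in the run_test line passes the test precisely because the tool exits non-zero (the es=1 bookkeeping that follows). A minimal sketch of the same check done by hand, assuming the example binary built at the path this job uses (the test itself also feeds a JSON config over /dev/fd/62, omitted here for brevity):

    # Expect failure: an unsupported -w value makes argument parsing fail,
    # so the command below should exit non-zero.
    if ! /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar; then
        echo "unsupported workload rejected, as accel_wrong_workload expects"
    fi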
00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:14.050 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:14.050 ************************************ 00:06:14.050 END TEST accel_wrong_workload 00:06:14.050 ************************************ 00:06:14.051 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:14.051 00:06:14.051 real 0m0.034s 00:06:14.051 user 0m0.022s 00:06:14.051 sys 0m0.012s 00:06:14.051 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.051 08:01:35 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:14.309 08:01:35 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:14.309 08:01:35 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:14.309 08:01:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.309 08:01:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.309 ************************************ 00:06:14.309 START TEST accel_negative_buffers 00:06:14.309 ************************************ 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:14.309 08:01:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:14.309 08:01:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:14.309 08:01:35 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.309 08:01:35 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.309 08:01:35 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.309 08:01:35 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.309 08:01:35 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.309 08:01:35 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:14.309 08:01:35 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:14.309 -x option must be non-negative. 
00:06:14.309 [2024-06-10 08:01:35.978132] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:14.309 accel_perf options: 00:06:14.309 [-h help message] 00:06:14.309 [-q queue depth per core] 00:06:14.309 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:14.309 [-T number of threads per core 00:06:14.309 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:14.309 [-t time in seconds] 00:06:14.309 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:14.309 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:14.309 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:14.309 [-l for compress/decompress workloads, name of uncompressed input file 00:06:14.309 [-S for crc32c workload, use this seed value (default 0) 00:06:14.309 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:14.309 [-f for fill workload, use this BYTE value (default 255) 00:06:14.309 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:14.309 [-y verify result if this switch is on] 00:06:14.309 [-a tasks to allocate per core (default: same value as -q)] 00:06:14.309 Can be used to spread operations across a wider range of memory. 00:06:14.309 ************************************ 00:06:14.309 END TEST accel_negative_buffers 00:06:14.309 ************************************ 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:14.309 00:06:14.309 real 0m0.038s 00:06:14.309 user 0m0.023s 00:06:14.309 sys 0m0.014s 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.309 08:01:35 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:14.310 08:01:36 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:14.310 08:01:36 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:14.310 08:01:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.310 08:01:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.310 ************************************ 00:06:14.310 START TEST accel_crc32c 00:06:14.310 ************************************ 00:06:14.310 08:01:36 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:14.310 08:01:36 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:14.310 08:01:36 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:14.310 [2024-06-10 08:01:36.063862] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:14.310 [2024-06-10 08:01:36.063995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61314 ] 00:06:14.568 [2024-06-10 08:01:36.206417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.568 [2024-06-10 08:01:36.344447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 08:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
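The value stream above is the crc32c test writing its expected configuration (crc32c opcode, seed 32, 4096-byte buffers, software module, a 1-second run) before the perf tool is launched. A minimal sketch of the equivalent direct invocation, assuming the same example binary; -S 32 is the crc32c seed and -y asks accel_perf to verify the results:

    # Equivalent of: run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y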
00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:15.943 08:01:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.943 00:06:15.943 real 0m1.552s 00:06:15.943 user 0m1.332s 00:06:15.943 sys 0m0.125s 00:06:15.943 ************************************ 00:06:15.943 END TEST accel_crc32c 00:06:15.943 ************************************ 00:06:15.943 08:01:37 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.943 08:01:37 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:15.943 08:01:37 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:15.943 08:01:37 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:15.943 08:01:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.943 08:01:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.943 ************************************ 00:06:15.943 START TEST accel_crc32c_C2 00:06:15.943 ************************************ 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.943 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:15.943 [2024-06-10 08:01:37.663535] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:15.943 [2024-06-10 08:01:37.663645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61354 ] 00:06:15.943 [2024-06-10 08:01:37.801883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.201 [2024-06-10 08:01:37.940937] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:16.201 08:01:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.201 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.201 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:16.201 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.201 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.201 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.201 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:16.201 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.201 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.201 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.201 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.202 08:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 ************************************ 00:06:17.576 END TEST accel_crc32c_C2 00:06:17.576 ************************************ 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.576 00:06:17.576 real 0m1.555s 00:06:17.576 user 0m1.339s 00:06:17.576 sys 0m0.121s 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.576 08:01:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 08:01:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:17.576 08:01:39 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:17.576 08:01:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.576 08:01:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 ************************************ 00:06:17.576 START TEST accel_copy 00:06:17.576 ************************************ 00:06:17.576 08:01:39 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:17.576 08:01:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:17.576 08:01:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:17.576 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 08:01:39 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:17.576 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.577 08:01:39 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:17.577 08:01:39 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:06:17.577 08:01:39 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.577 08:01:39 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.577 08:01:39 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.577 08:01:39 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.577 08:01:39 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.577 08:01:39 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:17.577 08:01:39 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:17.577 [2024-06-10 08:01:39.272440] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:17.577 [2024-06-10 08:01:39.272527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61383 ] 00:06:17.577 [2024-06-10 08:01:39.405477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.835 [2024-06-10 08:01:39.545037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.835 08:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.236 08:01:40 accel.accel_copy -- 
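When the 1-second copy run finishes, the test reads the recorded values back and passes only if a module and an opcode were captured and the module is the software one, which is what the [[ -n software ]], [[ -n copy ]] and software == software checks a little further on in the trace assert. A simplified sketch of that post-run assertion, using the accel_module and accel_opc names that appear in the trace:

    # Post-run check, simplified from the trace.
    if [[ -n "$accel_module" && -n "$accel_opc" && "$accel_module" == software ]]; then
        echo "copy was executed by the software accel module"
    fi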
accel/accel.sh@19 -- # read -r var val 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:19.236 08:01:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.236 00:06:19.236 real 0m1.548s 00:06:19.236 user 0m1.335s 00:06:19.236 sys 0m0.119s 00:06:19.236 08:01:40 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:19.236 ************************************ 00:06:19.236 END TEST accel_copy 00:06:19.236 ************************************ 00:06:19.236 08:01:40 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:19.236 08:01:40 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.236 08:01:40 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:19.236 08:01:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:19.236 08:01:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.236 ************************************ 00:06:19.236 START TEST accel_fill 00:06:19.236 ************************************ 00:06:19.236 08:01:40 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.236 08:01:40 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:19.236 08:01:40 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:19.236 08:01:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.236 08:01:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.236 08:01:40 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.236 08:01:40 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:19.236 08:01:40 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.236 08:01:40 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.236 08:01:40 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.236 08:01:40 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.237 08:01:40 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.237 08:01:40 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.237 08:01:40 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:19.237 08:01:40 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
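The fill case started just above adds the fill-specific flags documented in the usage text earlier: -f 128 sets the fill byte (recorded as 0x80 in the value stream that follows), -q 64 the queue depth per core and -a 64 the tasks allocated per core. A minimal sketch of the equivalent direct invocation, assuming the same example binary:

    # Equivalent of: run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y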
00:06:19.237 [2024-06-10 08:01:40.878172] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:19.237 [2024-06-10 08:01:40.878273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61423 ] 00:06:19.237 [2024-06-10 08:01:41.015669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.495 [2024-06-10 08:01:41.163100] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:19.495 08:01:41 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.495 08:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.881 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.882 ************************************ 00:06:20.882 END TEST accel_fill 00:06:20.882 ************************************ 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:20.882 08:01:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.882 00:06:20.882 real 0m1.565s 00:06:20.882 user 0m1.351s 00:06:20.882 sys 0m0.120s 00:06:20.882 08:01:42 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.882 08:01:42 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:20.882 08:01:42 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:20.882 08:01:42 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:20.882 08:01:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:20.882 08:01:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.882 ************************************ 00:06:20.882 START TEST accel_copy_crc32c 00:06:20.882 ************************************ 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:20.882 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:20.882 [2024-06-10 08:01:42.489220] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
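The copy_crc32c case started just above combines a copy with a CRC-32C computation, which is why the value stream that follows records two 4096-byte buffers (source and destination) rather than one. A minimal sketch of the equivalent direct invocation, assuming the same example binary:

    # Equivalent of: run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y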
00:06:20.882 [2024-06-10 08:01:42.489326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61456 ] 00:06:20.882 [2024-06-10 08:01:42.628444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.141 [2024-06-10 08:01:42.782645] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.141 08:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.518 ************************************ 00:06:22.518 END TEST accel_copy_crc32c 00:06:22.518 ************************************ 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.518 00:06:22.518 real 0m1.577s 00:06:22.518 user 0m1.354s 00:06:22.518 sys 0m0.131s 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.518 08:01:44 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:22.518 08:01:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:22.518 08:01:44 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:22.518 08:01:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.518 08:01:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.518 ************************************ 00:06:22.518 START TEST accel_copy_crc32c_C2 00:06:22.518 ************************************ 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:22.518 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:22.518 [2024-06-10 08:01:44.113227] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:22.518 [2024-06-10 08:01:44.113317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61492 ] 00:06:22.518 [2024-06-10 08:01:44.251341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.777 [2024-06-10 08:01:44.391503] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.777 08:01:44 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.777 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
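The copy_crc32c workloads above and the accel_copy_crc32c_C2 parameters being dumped here exercise accel_perf's fused copy-plus-CRC-32C operation: the source is copied to the destination while a CRC-32C (Castagnoli) checksum is computed over the same data. A minimal software sketch of those semantics only (ad hoc function name, bit-at-a-time reflected polynomial 0x82F63B78; this is not SPDK's accel code, which would typically use lookup tables or the SSE4.2 CRC32 instruction, and whose exact seed handling may differ):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative copy_crc32c semantics: copy src into dst and return the
 * CRC-32C (Castagnoli) of the data. Bit-at-a-time for clarity only. */
static uint32_t copy_crc32c_sw(void *dst, const void *src, size_t len,
                               uint32_t seed)
{
    const uint8_t *in = src;
    uint32_t crc = ~seed;

    memcpy(dst, src, len);
    for (size_t i = 0; i < len; i++) {
        crc ^= in[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return ~crc;
}

With accel_module=software, as selected in the dumps above, work of roughly this shape runs on the CPU; hardware accel modules can offload the same operation.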
00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.778 08:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.154 00:06:24.154 real 0m1.555s 00:06:24.154 user 0m1.334s 00:06:24.154 sys 0m0.129s 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:24.154 08:01:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:24.154 ************************************ 00:06:24.154 END TEST accel_copy_crc32c_C2 00:06:24.154 ************************************ 00:06:24.154 08:01:45 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:06:24.154 08:01:45 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:24.154 08:01:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.154 08:01:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.154 ************************************ 00:06:24.154 START TEST accel_dualcast 00:06:24.154 ************************************ 00:06:24.154 08:01:45 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:24.154 08:01:45 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:24.154 [2024-06-10 08:01:45.719816] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
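accel_dualcast, configured just above with -w dualcast on 4096-byte buffers, writes a single source to two destinations in one operation. A sketch of the semantics only (ad hoc name, not the SPDK code path, which may hand this to a DMA engine as a single descriptor):

#include <stddef.h>
#include <string.h>

/* Illustrative dualcast: fan one source buffer out to two destinations.
 * The software module does the equivalent of two copies; an offload
 * engine may be able to express it as a single operation. */
static void dualcast_sw(void *dst1, void *dst2, const void *src, size_t len)
{
    memcpy(dst1, src, len);
    memcpy(dst2, src, len);
}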
00:06:24.154 [2024-06-10 08:01:45.720601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61532 ] 00:06:24.154 [2024-06-10 08:01:45.861375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.154 [2024-06-10 08:01:45.994631] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.413 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.414 08:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.792 
08:01:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:25.792 08:01:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.792 00:06:25.792 real 0m1.560s 00:06:25.792 user 0m1.337s 00:06:25.792 sys 0m0.128s 00:06:25.792 08:01:47 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.792 ************************************ 00:06:25.792 END TEST accel_dualcast 00:06:25.792 ************************************ 00:06:25.792 08:01:47 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:25.792 08:01:47 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:25.792 08:01:47 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:25.792 08:01:47 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.792 08:01:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.792 ************************************ 00:06:25.792 START TEST accel_compare 00:06:25.792 ************************************ 00:06:25.792 08:01:47 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:25.792 08:01:47 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:25.792 [2024-06-10 08:01:47.334423] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
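accel_compare, whose run starts above, submits -w compare on 4096-byte buffers and simply reports whether the two buffers hold identical bytes. In plain C the check amounts to the following sketch (ad hoc name, not an SPDK API):

#include <stddef.h>
#include <string.h>

/* Illustrative compare: 0 when the buffers match, non-zero otherwise,
 * which is the pass/fail result the compare workload is built around. */
static int compare_sw(const void *a, const void *b, size_t len)
{
    return memcmp(a, b, len) != 0;
}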
00:06:25.792 [2024-06-10 08:01:47.334518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61561 ] 00:06:25.792 [2024-06-10 08:01:47.474048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.792 [2024-06-10 08:01:47.614260] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.051 08:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.425 08:01:48 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:27.425 08:01:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.425 00:06:27.425 real 0m1.558s 00:06:27.425 user 0m1.336s 00:06:27.425 sys 0m0.130s 00:06:27.425 08:01:48 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:27.425 08:01:48 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:27.425 ************************************ 00:06:27.425 END TEST accel_compare 00:06:27.425 ************************************ 00:06:27.425 08:01:48 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:27.425 08:01:48 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:27.425 08:01:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.425 08:01:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.425 ************************************ 00:06:27.425 START TEST accel_xor 00:06:27.425 ************************************ 00:06:27.425 08:01:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:06:27.425 08:01:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:27.425 08:01:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:27.425 08:01:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:27.425 08:01:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.425 08:01:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.425 08:01:48 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:27.425 08:01:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:27.426 08:01:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.426 08:01:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.426 08:01:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.426 08:01:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.426 08:01:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.426 08:01:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:27.426 08:01:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:27.426 [2024-06-10 08:01:48.940645] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
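The accel_xor run starting above uses -w xor; the parameter dump that follows shows two 4096-byte source buffers (val=2) being XORed byte-wise into a destination, the primitive behind RAID-5-style parity. A minimal software equivalent for an arbitrary number of sources (ad hoc name, not an SPDK API):

#include <stddef.h>
#include <stdint.h>

/* Illustrative N-way XOR: dst[i] = srcs[0][i] ^ srcs[1][i] ^ ... With two
 * sources this matches the run above; the later -x 3 run passes three. */
static void xor_sw(uint8_t *dst, const uint8_t *const *srcs, size_t nsrcs,
                   size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t v = 0;

        for (size_t s = 0; s < nsrcs; s++)
            v ^= srcs[s][i];
        dst[i] = v;
    }
}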
00:06:27.426 [2024-06-10 08:01:48.940748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61601 ] 00:06:27.426 [2024-06-10 08:01:49.077532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.426 [2024-06-10 08:01:49.196290] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:27.426 08:01:49 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.426 08:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.801 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.801 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.801 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.801 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.801 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.801 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.802 00:06:28.802 real 0m1.551s 00:06:28.802 user 0m1.329s 00:06:28.802 sys 0m0.128s 00:06:28.802 08:01:50 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:28.802 08:01:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:28.802 ************************************ 00:06:28.802 END TEST accel_xor 00:06:28.802 ************************************ 00:06:28.802 08:01:50 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:28.802 08:01:50 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:28.802 08:01:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:28.802 08:01:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.802 ************************************ 00:06:28.802 START TEST accel_xor 00:06:28.802 ************************************ 00:06:28.802 08:01:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:28.802 08:01:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:28.802 [2024-06-10 08:01:50.546262] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
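The second accel_xor run, launched just above with -x 3, repeats the workload with three source buffers (val=3 in the dump that follows). The property parity schemes rely on is that XOR-ing the parity with all but one source reproduces the missing one; a self-contained toy check of that property (ad hoc names, illustrative only):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy check of the XOR parity property behind the -x 3 run:
 * p = a ^ b ^ c, and any source (here a) can be rebuilt as p ^ b ^ c. */
static void xor_into(uint8_t *dst, const uint8_t *src, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst[i] ^= src[i];
}

static void xor_parity_demo(void)
{
    uint8_t a[4096], b[4096], c[4096], p[4096];

    memset(a, 0xa5, sizeof(a));
    memset(b, 0x3c, sizeof(b));
    memset(c, 0x0f, sizeof(c));

    memset(p, 0, sizeof(p));
    xor_into(p, a, sizeof(p));          /* p = a ^ b ^ c (parity)       */
    xor_into(p, b, sizeof(p));
    xor_into(p, c, sizeof(p));

    xor_into(p, b, sizeof(p));          /* strip b and c again ...      */
    xor_into(p, c, sizeof(p));
    assert(memcmp(p, a, sizeof(a)) == 0);  /* ... and a falls back out  */
}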
00:06:28.802 [2024-06-10 08:01:50.546362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61630 ] 00:06:29.090 [2024-06-10 08:01:50.685431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.090 [2024-06-10 08:01:50.797457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:29.090 08:01:50 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.090 08:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.479 08:01:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.479 08:01:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.479 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.479 08:01:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.479 08:01:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.479 08:01:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:30.479 08:01:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.479 00:06:30.479 real 0m1.534s 00:06:30.479 user 0m1.304s 00:06:30.479 sys 0m0.136s 00:06:30.479 08:01:52 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.479 08:01:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:30.479 ************************************ 00:06:30.479 END TEST accel_xor 00:06:30.479 ************************************ 00:06:30.479 08:01:52 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:30.479 08:01:52 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:30.479 08:01:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.479 08:01:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.479 ************************************ 00:06:30.479 START TEST accel_dif_verify 00:06:30.479 ************************************ 00:06:30.479 08:01:52 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:30.479 08:01:52 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:30.479 [2024-06-10 08:01:52.137967] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
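accel_dif_verify, which begins above, checks T10 protection information; the parameter dump that follows shows 4096-byte buffers, 512-byte blocks and an 8-byte protection field per block. A hedged sketch of just the guard-tag portion, assuming the common layout in which the 8-byte tuple starts with a big-endian CRC-16 guard (polynomial 0x8BB7) computed over the block data and is interleaved after each block; the names are ad hoc, and SPDK's real DIF code covers many more formats and also checks application and reference tags:

#include <stddef.h>
#include <stdint.h>

/* Assumed CRC-16/T10-DIF: non-reflected, polynomial 0x8BB7, zero init. */
static uint16_t crc16_t10dif(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Sketch only: returns 0 if every block's stored guard (assumed to be the
 * first two bytes of an interleaved 8-byte tuple, big-endian) matches the
 * CRC recomputed over the 512-byte block. */
static int dif_verify_guards(const uint8_t *buf, size_t nblocks)
{
    const size_t block = 512, dif = 8;

    for (size_t i = 0; i < nblocks; i++) {
        const uint8_t *blk = buf + i * (block + dif);
        uint16_t stored = (uint16_t)((blk[block] << 8) | blk[block + 1]);

        if (crc16_t10dif(blk, block) != stored)
            return -1;
    }
    return 0;
}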
00:06:30.479 [2024-06-10 08:01:52.138093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61670 ] 00:06:30.479 [2024-06-10 08:01:52.286822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.740 [2024-06-10 08:01:52.405652] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:30.740 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.741 08:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 ************************************ 00:06:32.117 END TEST accel_dif_verify 00:06:32.117 ************************************ 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.118 08:01:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.118 08:01:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.118 08:01:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:32.118 08:01:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.118 00:06:32.118 real 0m1.567s 00:06:32.118 user 0m1.327s 00:06:32.118 sys 0m0.146s 00:06:32.118 08:01:53 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:32.118 08:01:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:32.118 08:01:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:32.118 08:01:53 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:32.118 08:01:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:32.118 08:01:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.118 ************************************ 00:06:32.118 START TEST accel_dif_generate 00:06:32.118 ************************************ 00:06:32.118 08:01:53 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:32.118 08:01:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:32.118 [2024-06-10 08:01:53.752363] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:32.118 [2024-06-10 08:01:53.752455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61701 ] 00:06:32.118 [2024-06-10 08:01:53.890413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.376 [2024-06-10 08:01:54.058261] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:32.376 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 
08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.377 08:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.778 ************************************ 00:06:33.778 END TEST accel_dif_generate 00:06:33.778 ************************************ 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:33.778 08:01:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.778 00:06:33.778 real 0m1.584s 00:06:33.778 user 0m1.355s 
00:06:33.778 sys 0m0.137s 00:06:33.778 08:01:55 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.778 08:01:55 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:33.778 08:01:55 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:33.778 08:01:55 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:33.778 08:01:55 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.778 08:01:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.778 ************************************ 00:06:33.778 START TEST accel_dif_generate_copy 00:06:33.778 ************************************ 00:06:33.778 08:01:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:33.778 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:33.778 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:33.778 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:33.779 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:33.779 [2024-06-10 08:01:55.385151] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
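
The settings echoed for the three DIF cases describe the same shape of run: 4096-byte buffers, a 1-second duration, and the software module, with dif_verify and dif_generate additionally reporting '512 bytes' and '8 bytes' values (presumably the DIF block and per-block metadata sizes), while dif_generate_copy, starting above, sticks to the plain 4096-byte transfer. A minimal way to repeat one of these runs by hand, assuming this workspace layout and dropping the -c /dev/fd/62 JSON config descriptor that accel.sh pipes in (without it the app is expected to fall back to the software path exercised here):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w dif_generate_copy
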
00:06:33.779 [2024-06-10 08:01:55.385265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61741 ] 00:06:33.779 [2024-06-10 08:01:55.519584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.037 [2024-06-10 08:01:55.652573] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.037 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.038 08:01:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.413 00:06:35.413 real 0m1.552s 00:06:35.413 user 0m1.335s 00:06:35.413 sys 0m0.123s 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.413 08:01:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:35.413 ************************************ 00:06:35.413 END TEST accel_dif_generate_copy 00:06:35.413 ************************************ 00:06:35.413 08:01:56 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:35.413 08:01:56 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.413 08:01:56 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:35.413 08:01:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.413 08:01:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.413 ************************************ 00:06:35.413 START TEST accel_comp 00:06:35.413 ************************************ 00:06:35.413 08:01:56 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.413 08:01:56 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:35.413 08:01:56 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:35.413 08:01:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
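
The compress case launched above additionally hands accel_perf an input file through -l (the bib file under test/accel in this repo); the same hand-run sketch applies, again minus the harness-supplied -c /dev/fd/62 descriptor:

    ./build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
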
00:06:35.413 08:01:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.413 08:01:56 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.414 08:01:56 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.414 08:01:56 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:35.414 08:01:56 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.414 08:01:56 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.414 08:01:56 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.414 08:01:56 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.414 08:01:56 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.414 08:01:56 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:35.414 08:01:56 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:35.414 [2024-06-10 08:01:56.989990] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:35.414 [2024-06-10 08:01:56.990091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61776 ] 00:06:35.414 [2024-06-10 08:01:57.133969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.414 [2024-06-10 08:01:57.266983] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.672 08:01:57 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.672 08:01:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.047 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:37.048 08:01:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.048 00:06:37.048 real 0m1.569s 00:06:37.048 user 0m1.344s 00:06:37.048 sys 0m0.132s 00:06:37.048 08:01:58 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.048 08:01:58 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:37.048 ************************************ 00:06:37.048 END TEST accel_comp 00:06:37.048 ************************************ 00:06:37.048 08:01:58 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.048 08:01:58 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:37.048 08:01:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:37.048 08:01:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.048 ************************************ 00:06:37.048 START TEST accel_decomp 00:06:37.048 ************************************ 00:06:37.048 08:01:58 accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:37.048 
08:01:58 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:37.048 08:01:58 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:37.048 [2024-06-10 08:01:58.610043] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:37.048 [2024-06-10 08:01:58.610147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61810 ] 00:06:37.048 [2024-06-10 08:01:58.749498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.048 [2024-06-10 08:01:58.856750] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.306 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.306 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.307 08:01:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.240 08:02:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.240 00:06:38.240 real 0m1.520s 00:06:38.240 user 0m1.304s 00:06:38.240 sys 0m0.126s 00:06:38.240 08:02:00 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.240 ************************************ 00:06:38.240 END TEST accel_decomp 00:06:38.240 ************************************ 00:06:38.240 08:02:00 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:38.499 08:02:00 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:38.499 08:02:00 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:38.499 08:02:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.499 08:02:00 accel -- common/autotest_common.sh@10 -- # set +x 
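
accel_decomp above decompressed the same bib file and passed in about 1.5 s of wall time. Its command carried -y, which lines up with the val=Yes in its settings echo (the dif cases, run without -y, echoed val=No), so the decompressed data was verified rather than only timed. The accel_decomp_full case starting below is launched with an identical command apart from an extra -o 0, the only difference visible in the two run_test lines; shown here without the harness -c descriptor, as before:

    ./build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y         # accel_decomp (just completed)
    ./build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0    # accel_decomp_full (starting below)
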
00:06:38.499 ************************************ 00:06:38.499 START TEST accel_decomp_full 00:06:38.499 ************************************ 00:06:38.499 08:02:00 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:38.499 08:02:00 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:38.499 [2024-06-10 08:02:00.181349] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:06:38.499 [2024-06-10 08:02:00.181441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61848 ] 00:06:38.499 [2024-06-10 08:02:00.321489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.758 [2024-06-10 08:02:00.433749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.758 08:02:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.133 08:02:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.133 08:02:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.133 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.134 08:02:01 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.134 08:02:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.134 00:06:40.134 real 0m1.517s 00:06:40.134 user 0m1.298s 00:06:40.134 sys 0m0.126s 00:06:40.134 08:02:01 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.134 ************************************ 00:06:40.134 END TEST accel_decomp_full 00:06:40.134 ************************************ 00:06:40.134 08:02:01 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:40.134 08:02:01 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:40.134 08:02:01 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:40.134 08:02:01 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.134 08:02:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.134 ************************************ 00:06:40.134 START TEST accel_decomp_mcore 00:06:40.134 ************************************ 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
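[editor's note] The long runs of "IFS=:", "read -r var val" and 'case "$var"' entries above are the xtrace of accel_test's option loop, which walks colon-separated key/value pairs and latches the ones it later asserts on (accel_opc, accel_module). A shape-only reconstruction is sketched below; the key names and the $test_settings variable are illustrative, not copied from accel.sh.

    # Shape-only reconstruction of the option loop behind the repeated
    # "IFS=:" / "read -r var val" / case "$var" trace entries (assumption).
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;      # later checked: [[ -n decompress ]]
            module) accel_module=$val ;;   # later checked: [[ -n software ]]
            *)      ;;                     # size, core mask, run time etc. handled similarly
        esac
    done <<< "$test_settings"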
00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:40.134 08:02:01 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:40.134 [2024-06-10 08:02:01.754743] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:40.134 [2024-06-10 08:02:01.754867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61882 ] 00:06:40.134 [2024-06-10 08:02:01.891471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.392 [2024-06-10 08:02:02.042535] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.392 [2024-06-10 08:02:02.042673] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.392 [2024-06-10 08:02:02.042846] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.392 [2024-06-10 08:02:02.042926] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.393 08:02:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.768 00:06:41.768 real 0m1.574s 00:06:41.768 user 0m4.743s 00:06:41.768 sys 0m0.141s 00:06:41.768 ************************************ 00:06:41.768 END TEST accel_decomp_mcore 00:06:41.768 ************************************ 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:41.768 08:02:03 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:41.768 08:02:03 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.768 08:02:03 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:41.768 08:02:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.768 08:02:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.768 ************************************ 00:06:41.768 START TEST accel_decomp_full_mcore 00:06:41.768 ************************************ 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:41.768 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
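[editor's note] The accel_decomp_mcore case that just finished was launched with -m 0xf, and the startup notices above duly report four cores with reactors on cores 0 through 3; user time well above real time is consistent with several reactor cores polling concurrently. The helper below is not part of the test scripts, just a small standalone sketch that expands such a hex core mask into the core list it selects.

    # Standalone helper (not from the harness): expand a hex core mask,
    # e.g. the 0xf passed via -m, into the CPU cores it selects.
    mask_to_cores() {
        local mask=$(( $1 )) core=0 out=""
        while (( mask > 0 )); do
            if (( mask & 1 )); then out+="$core "; fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        printf '%s\n' "${out% }"
    }
    mask_to_cores 0xf    # prints "0 1 2 3", matching the four reactor notices above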
00:06:41.768 [2024-06-10 08:02:03.372859] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:41.768 [2024-06-10 08:02:03.373104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61920 ] 00:06:41.768 [2024-06-10 08:02:03.512029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.027 [2024-06-10 08:02:03.672394] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.027 [2024-06-10 08:02:03.672482] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.027 [2024-06-10 08:02:03.672614] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.027 [2024-06-10 08:02:03.672617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.027 08:02:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.404 ************************************ 00:06:43.404 END TEST accel_decomp_full_mcore 00:06:43.404 
************************************ 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.404 00:06:43.404 real 0m1.603s 00:06:43.404 user 0m4.784s 00:06:43.404 sys 0m0.154s 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:43.404 08:02:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:43.404 08:02:04 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:43.404 08:02:04 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:43.404 08:02:04 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:43.404 08:02:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.404 ************************************ 00:06:43.404 START TEST accel_decomp_mthread 00:06:43.404 ************************************ 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:43.404 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:43.404 [2024-06-10 08:02:05.032128] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
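[editor's note] Every case in this section is wrapped by run_test, which is what produces the START TEST / END TEST banners and the real/user/sys timings seen above. The sketch below reconstructs only the wrapper's visible behaviour from this output; the real helper in autotest_common.sh also toggles xtrace and performs the '[' N -le 1 ']' argument check traced earlier.

    # Reconstruction of run_test's visible behaviour (assumption, not the
    # actual autotest_common.sh source).
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g. run_test accel_decomp_mthread accel_test -t 1 -w decompress \
    #          -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2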
00:06:43.404 [2024-06-10 08:02:05.032240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61963 ] 00:06:43.404 [2024-06-10 08:02:05.171238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.664 [2024-06-10 08:02:05.326201] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.664 08:02:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.042 00:06:45.042 real 0m1.587s 00:06:45.042 user 0m1.363s 00:06:45.042 sys 0m0.132s 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.042 ************************************ 00:06:45.042 END TEST accel_decomp_mthread 00:06:45.042 ************************************ 00:06:45.042 08:02:06 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:45.042 08:02:06 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.042 08:02:06 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:45.042 08:02:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:45.042 08:02:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.042 ************************************ 00:06:45.042 START TEST accel_decomp_full_mthread 00:06:45.042 ************************************ 00:06:45.042 08:02:06 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.042 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:45.043 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:45.043 [2024-06-10 08:02:06.663564] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
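[editor's note] By this point all of the decompress variants in this section have been launched, and they differ only in the flags appended to the same base accel_perf command. The flag sets below are collected from the run_test traces above; the explanatory comments are inferences from the test names and the traced config values (111250 vs 4096 bytes, val=2), not from accel_perf documentation.

    # Flag sets per variant, copied from the run_test traces in this section.
    declare -A decomp_variants=(
        # whole 111250-byte input per operation instead of 4096-byte chunks
        [accel_decomp_full]="-o 0"
        # four cores: matches the reactors started on cores 0-3
        [accel_decomp_mcore]="-m 0xf"
        [accel_decomp_full_mcore]="-o 0 -m 0xf"
        # two worker threads, per the mthread naming and the val=2 entry
        [accel_decomp_mthread]="-T 2"
        [accel_decomp_full_mthread]="-o 0 -T 2"
    )
    for name in "${!decomp_variants[@]}"; do
        echo "run_test $name accel_test -t 1 -w decompress" \
             "-l /home/vagrant/spdk_repo/spdk/test/accel/bib -y ${decomp_variants[$name]}"
    done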
00:06:45.043 [2024-06-10 08:02:06.663666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61996 ] 00:06:45.043 [2024-06-10 08:02:06.802252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.302 [2024-06-10 08:02:06.930741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:06 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:45.302 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.303 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.303 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.303 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.303 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.303 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.303 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.303 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.303 08:02:07 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:45.303 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.303 08:02:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.681 ************************************ 00:06:46.681 END TEST accel_decomp_full_mthread 00:06:46.681 ************************************ 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.681 00:06:46.681 real 0m1.576s 00:06:46.681 user 0m1.351s 00:06:46.681 sys 0m0.129s 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.681 08:02:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:46.681 08:02:08 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:06:46.681 08:02:08 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:46.681 08:02:08 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:46.681 08:02:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.681 08:02:08 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:46.681 08:02:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.681 08:02:08 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.681 08:02:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.681 08:02:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.681 08:02:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.681 08:02:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.681 08:02:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:46.681 08:02:08 accel -- accel/accel.sh@41 -- # jq -r . 00:06:46.681 ************************************ 00:06:46.681 START TEST accel_dif_functional_tests 00:06:46.681 ************************************ 00:06:46.681 08:02:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:46.681 [2024-06-10 08:02:08.322761] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:46.681 [2024-06-10 08:02:08.322892] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62033 ] 00:06:46.681 [2024-06-10 08:02:08.462100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.940 [2024-06-10 08:02:08.592942] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.940 [2024-06-10 08:02:08.593076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.940 [2024-06-10 08:02:08.593080] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.940 [2024-06-10 08:02:08.648904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.940 00:06:46.940 00:06:46.940 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.940 http://cunit.sourceforge.net/ 00:06:46.940 00:06:46.940 00:06:46.940 Suite: accel_dif 00:06:46.940 Test: verify: DIF generated, GUARD check ...passed 00:06:46.940 Test: verify: DIF generated, APPTAG check ...passed 00:06:46.940 Test: verify: DIF generated, REFTAG check ...passed 00:06:46.940 Test: verify: DIF not generated, GUARD check ...passed 00:06:46.940 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 08:02:08.684928] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:46.941 [2024-06-10 08:02:08.685012] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:46.941 passed 00:06:46.941 Test: verify: DIF not generated, REFTAG check ...passed 00:06:46.941 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:46.941 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 08:02:08.685047] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:46.941 [2024-06-10 08:02:08.685205] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:46.941 passed 00:06:46.941 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:06:46.941 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:46.941 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:46.941 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 08:02:08.685535] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:46.941 passed 00:06:46.941 Test: verify copy: DIF generated, GUARD check ...passed 00:06:46.941 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:46.941 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:46.941 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:46.941 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 08:02:08.685960] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:46.941 passed 00:06:46.941 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:46.941 Test: generate copy: DIF generated, GUARD check ...[2024-06-10 08:02:08.686036] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:46.941 [2024-06-10 08:02:08.686078] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:46.941 passed 00:06:46.941 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:46.941 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:46.941 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:46.941 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:46.941 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:46.941 Test: generate copy: iovecs-len validate ...[2024-06-10 08:02:08.686569] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:46.941 passed 00:06:46.941 Test: generate copy: buffer alignment validate ...passed 00:06:46.941 00:06:46.941 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.941 suites 1 1 n/a 0 0 00:06:46.941 tests 26 26 26 0 0 00:06:46.941 asserts 115 115 115 0 n/a 00:06:46.941 00:06:46.941 Elapsed time = 0.005 seconds 00:06:47.199 00:06:47.199 real 0m0.656s 00:06:47.199 user 0m0.834s 00:06:47.199 sys 0m0.166s 00:06:47.199 ************************************ 00:06:47.199 END TEST accel_dif_functional_tests 00:06:47.199 ************************************ 00:06:47.199 08:02:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:47.199 08:02:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:47.199 ************************************ 00:06:47.199 END TEST accel 00:06:47.199 ************************************ 00:06:47.199 00:06:47.199 real 0m35.916s 00:06:47.199 user 0m37.282s 00:06:47.199 sys 0m4.225s 00:06:47.199 08:02:08 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:47.199 08:02:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.199 08:02:09 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:47.199 08:02:09 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:47.199 08:02:09 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:47.199 08:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:47.199 ************************************ 00:06:47.199 START TEST accel_rpc 00:06:47.199 ************************************ 00:06:47.200 08:02:09 accel_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:47.458 * Looking for test storage... 00:06:47.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:47.458 08:02:09 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.458 08:02:09 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62103 00:06:47.458 08:02:09 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62103 00:06:47.458 08:02:09 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 62103 ']' 00:06:47.458 08:02:09 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:47.458 08:02:09 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.458 08:02:09 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:47.458 08:02:09 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.458 08:02:09 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:47.458 08:02:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.458 [2024-06-10 08:02:09.149404] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
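(Editor's note, not part of the captured output; it refers to the accel_dif run above.) The accel_dif suite (26 tests, 115 asserts) exercises the three T10 DIF protection-information fields independently: the Guard tag (a CRC over the data block), the App Tag and the Ref Tag. The *ERROR* lines such as "Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867" are the expected output of the negative cases, not failures. The run above hands the accel JSON config to the dif binary on /dev/fd/62; a minimal by-hand rerun could look like the sketch below, where the config contents are only a placeholder (autotest builds the real JSON via build_accel_config):
  cd /home/vagrant/spdk_repo/spdk
  accel_json='{"subsystems": []}'                        # placeholder accel config, assumption
  ./test/accel/dif/dif -c /dev/fd/62 62<<< "$accel_json" # feed the JSON on fd 62, as in the log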
00:06:47.458 [2024-06-10 08:02:09.149495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62103 ] 00:06:47.458 [2024-06-10 08:02:09.286291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.717 [2024-06-10 08:02:09.424164] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.653 08:02:10 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:48.653 08:02:10 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:48.653 08:02:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:48.653 08:02:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:48.653 08:02:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:48.653 08:02:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:48.653 08:02:10 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:48.653 08:02:10 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:48.653 08:02:10 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.653 08:02:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.653 ************************************ 00:06:48.653 START TEST accel_assign_opcode 00:06:48.653 ************************************ 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.653 [2024-06-10 08:02:10.185055] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.653 [2024-06-10 08:02:10.193030] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.653 [2024-06-10 08:02:10.256852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.653 08:02:10 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.653 software 00:06:48.653 ************************************ 00:06:48.653 END TEST accel_assign_opcode 00:06:48.653 ************************************ 00:06:48.653 00:06:48.653 real 0m0.303s 00:06:48.653 user 0m0.052s 00:06:48.653 sys 0m0.015s 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.653 08:02:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.911 08:02:10 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62103 00:06:48.911 08:02:10 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 62103 ']' 00:06:48.911 08:02:10 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 62103 00:06:48.911 08:02:10 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:06:48.911 08:02:10 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:48.911 08:02:10 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 62103 00:06:48.911 killing process with pid 62103 00:06:48.911 08:02:10 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:48.911 08:02:10 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:48.911 08:02:10 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 62103' 00:06:48.911 08:02:10 accel_rpc -- common/autotest_common.sh@968 -- # kill 62103 00:06:48.911 08:02:10 accel_rpc -- common/autotest_common.sh@973 -- # wait 62103 00:06:49.170 00:06:49.170 real 0m1.953s 00:06:49.170 user 0m2.088s 00:06:49.170 sys 0m0.453s 00:06:49.170 08:02:10 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:49.170 08:02:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.170 ************************************ 00:06:49.170 END TEST accel_rpc 00:06:49.170 ************************************ 00:06:49.170 08:02:11 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:49.170 08:02:11 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:49.170 08:02:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:49.170 08:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.170 ************************************ 00:06:49.170 START TEST app_cmdline 00:06:49.170 ************************************ 00:06:49.170 08:02:11 app_cmdline -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:49.428 * Looking for test storage... 00:06:49.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.428 08:02:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:49.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
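(Editor's note, not part of the captured output; it refers to the accel_rpc run above.) The accel_assign_opcode case drives spdk_tgt, started with --wait-for-rpc, purely through rpc.py: it assigns the copy opcode first to a bogus module ("incorrect"), then to "software", completes initialization, and checks the resulting assignment. A minimal sketch of the same flow, assuming a target listening on the default /var/tmp/spdk.sock and using only RPC names that appear in the log:
  ./scripts/rpc.py accel_assign_opc -o copy -m software    # reassign the copy opcode to the software module
  ./scripts/rpc.py framework_start_init                    # complete startup once assignments are in place
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expected to print "software"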
00:06:49.428 08:02:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62190 00:06:49.428 08:02:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:49.428 08:02:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62190 00:06:49.428 08:02:11 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 62190 ']' 00:06:49.428 08:02:11 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.428 08:02:11 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:49.428 08:02:11 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.428 08:02:11 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:49.428 08:02:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.428 [2024-06-10 08:02:11.183565] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:49.428 [2024-06-10 08:02:11.184101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62190 ] 00:06:49.687 [2024-06-10 08:02:11.323127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.687 [2024-06-10 08:02:11.462414] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.687 [2024-06-10 08:02:11.518435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:50.646 { 00:06:50.646 "version": "SPDK v24.09-pre git sha1 3a44739b7", 00:06:50.646 "fields": { 00:06:50.646 "major": 24, 00:06:50.646 "minor": 9, 00:06:50.646 "patch": 0, 00:06:50.646 "suffix": "-pre", 00:06:50.646 "commit": "3a44739b7" 00:06:50.646 } 00:06:50.646 } 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:50.646 08:02:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@649 -- # 
local es=0 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:50.646 08:02:12 app_cmdline -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.647 08:02:12 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:50.647 08:02:12 app_cmdline -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.647 08:02:12 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:50.647 08:02:12 app_cmdline -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.905 request: 00:06:50.905 { 00:06:50.905 "method": "env_dpdk_get_mem_stats", 00:06:50.905 "req_id": 1 00:06:50.905 } 00:06:50.905 Got JSON-RPC error response 00:06:50.905 response: 00:06:50.905 { 00:06:50.905 "code": -32601, 00:06:50.905 "message": "Method not found" 00:06:50.905 } 00:06:50.905 08:02:12 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:50.905 08:02:12 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:50.905 08:02:12 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:50.905 08:02:12 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:50.905 08:02:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62190 00:06:50.905 08:02:12 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 62190 ']' 00:06:50.905 08:02:12 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 62190 00:06:50.905 08:02:12 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:06:50.905 08:02:12 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:50.905 08:02:12 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 62190 00:06:51.163 killing process with pid 62190 00:06:51.163 08:02:12 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:51.163 08:02:12 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:51.163 08:02:12 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 62190' 00:06:51.163 08:02:12 app_cmdline -- common/autotest_common.sh@968 -- # kill 62190 00:06:51.163 08:02:12 app_cmdline -- common/autotest_common.sh@973 -- # wait 62190 00:06:51.421 00:06:51.421 real 0m2.183s 00:06:51.421 user 0m2.734s 00:06:51.421 sys 0m0.495s 00:06:51.421 08:02:13 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.421 ************************************ 00:06:51.421 END TEST app_cmdline 00:06:51.421 ************************************ 00:06:51.421 08:02:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.421 08:02:13 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:51.421 08:02:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:51.421 08:02:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 
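(Editor's note, not part of the captured output; it refers to the app_cmdline run above.) Here spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable and anything else fails with JSON-RPC error -32601 ("Method not found"), which is exactly what the env_dpdk_get_mem_stats probe demonstrates. The same checks, sketched against the default /var/tmp/spdk.sock socket:
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # expected to list exactly rpc_get_methods and spdk_get_version
  ./scripts/rpc.py spdk_get_version | jq -r '.version'   # e.g. "SPDK v24.09-pre git sha1 3a44739b7"
  ./scripts/rpc.py env_dpdk_get_mem_stats                # not in --rpcs-allowed, so it returns "Method not found" (-32601)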
00:06:51.421 08:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:51.421 ************************************ 00:06:51.421 START TEST version 00:06:51.421 ************************************ 00:06:51.421 08:02:13 version -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:51.679 * Looking for test storage... 00:06:51.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:51.679 08:02:13 version -- app/version.sh@17 -- # get_header_version major 00:06:51.679 08:02:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.679 08:02:13 version -- app/version.sh@14 -- # cut -f2 00:06:51.679 08:02:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.679 08:02:13 version -- app/version.sh@17 -- # major=24 00:06:51.679 08:02:13 version -- app/version.sh@18 -- # get_header_version minor 00:06:51.679 08:02:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.679 08:02:13 version -- app/version.sh@14 -- # cut -f2 00:06:51.679 08:02:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.679 08:02:13 version -- app/version.sh@18 -- # minor=9 00:06:51.679 08:02:13 version -- app/version.sh@19 -- # get_header_version patch 00:06:51.679 08:02:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.679 08:02:13 version -- app/version.sh@14 -- # cut -f2 00:06:51.679 08:02:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.679 08:02:13 version -- app/version.sh@19 -- # patch=0 00:06:51.679 08:02:13 version -- app/version.sh@20 -- # get_header_version suffix 00:06:51.679 08:02:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.679 08:02:13 version -- app/version.sh@14 -- # cut -f2 00:06:51.679 08:02:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.679 08:02:13 version -- app/version.sh@20 -- # suffix=-pre 00:06:51.679 08:02:13 version -- app/version.sh@22 -- # version=24.9 00:06:51.679 08:02:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:51.679 08:02:13 version -- app/version.sh@28 -- # version=24.9rc0 00:06:51.680 08:02:13 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:51.680 08:02:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:51.680 08:02:13 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:51.680 08:02:13 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:51.680 ************************************ 00:06:51.680 END TEST version 00:06:51.680 ************************************ 00:06:51.680 00:06:51.680 real 0m0.150s 00:06:51.680 user 0m0.079s 00:06:51.680 sys 0m0.104s 00:06:51.680 08:02:13 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.680 08:02:13 version -- common/autotest_common.sh@10 -- # set +x 00:06:51.680 08:02:13 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:51.680 08:02:13 -- spdk/autotest.sh@198 -- # uname -s 00:06:51.680 08:02:13 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:51.680 08:02:13 -- spdk/autotest.sh@199 -- # 
[[ 0 -eq 1 ]] 00:06:51.680 08:02:13 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:51.680 08:02:13 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:51.680 08:02:13 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:51.680 08:02:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:51.680 08:02:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.680 08:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:51.680 ************************************ 00:06:51.680 START TEST spdk_dd 00:06:51.680 ************************************ 00:06:51.680 08:02:13 spdk_dd -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:51.680 * Looking for test storage... 00:06:51.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:51.680 08:02:13 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.680 08:02:13 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.680 08:02:13 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.680 08:02:13 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.680 08:02:13 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.680 08:02:13 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.680 08:02:13 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.680 08:02:13 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:51.680 08:02:13 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.680 08:02:13 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:52.248 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:52.248 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:52.248 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:52.248 08:02:13 spdk_dd -- dd/dd.sh@11 -- # 
nvmes=($(nvme_in_userspace)) 00:06:52.248 08:02:13 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:52.248 08:02:13 spdk_dd 
-- scripts/common.sh@320 -- # uname -s 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:52.248 08:02:13 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:52.248 08:02:13 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # 
[[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.248 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.13.1 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:52.249 08:02:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@142 -- # read -r lib _ 
so _ 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:52.249 * spdk_dd linked to liburing 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:52.249 08:02:14 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:52.249 08:02:14 
spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:52.249 08:02:14 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@69 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:52.250 08:02:14 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:52.250 08:02:14 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:52.250 08:02:14 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:52.250 08:02:14 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:52.250 08:02:14 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:52.250 08:02:14 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:52.250 08:02:14 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:52.250 08:02:14 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:52.250 08:02:14 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:52.250 08:02:14 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:52.250 08:02:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:52.250 ************************************ 00:06:52.250 START TEST spdk_dd_basic_rw 00:06:52.250 ************************************ 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:52.250 * Looking for test storage... 
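Up to this point dd/common.sh has only been probing the environment: it walked the shared-library dependencies of the spdk_dd binary until it found a liburing.so.* entry, printed "* spdk_dd linked to liburing", sourced test/common/build_config.sh (note CONFIG_URING=y and CONFIG_URING_ZNS=y in the dump above), checked that /usr/lib64/liburing.so.2 is present, and exported liburing_in_use=1, after which dd.sh handed control to basic_rw.sh with the two NVMe controllers named on the command line, 0000:00:10.0 and 0000:00:11.0. A rough reconstruction of that link check, assuming ldd supplies the dependency list (the real dd/common.sh may obtain and post-process it differently):

    # Sketch only: scan "lib => path" pairs for io_uring support in spdk_dd.
    liburing_in_use=0
    while read -r lib _ so _; do
        if [[ $lib == liburing.so.* ]]; then
            liburing_in_use=1                        # spdk_dd links against liburing
            printf '* spdk_dd linked to liburing\n'
            break
        fi
    done < <(ldd /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    export liburing_in_use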
00:06:52.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:52.250 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:52.511 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:52.511 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change 
Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion 
Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- 
dd/basic_rw.sh@93 -- # native_bs=4096 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.512 ************************************ 00:06:52.512 START TEST dd_bs_lt_native_bs 00:06:52.512 ************************************ 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@649 -- # local es=0 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.512 08:02:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:52.512 { 00:06:52.512 "subsystems": [ 00:06:52.512 { 00:06:52.512 "subsystem": "bdev", 00:06:52.512 "config": [ 00:06:52.512 { 00:06:52.512 "params": { 00:06:52.512 "trtype": "pcie", 00:06:52.512 "traddr": "0000:00:10.0", 00:06:52.512 "name": "Nvme0" 00:06:52.512 }, 00:06:52.512 "method": "bdev_nvme_attach_controller" 00:06:52.512 }, 00:06:52.512 { 00:06:52.512 "method": "bdev_wait_for_examine" 00:06:52.512 } 00:06:52.512 ] 00:06:52.512 } 00:06:52.512 ] 00:06:52.512 } 00:06:52.512 [2024-06-10 08:02:14.365775] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 
initialization... 00:06:52.512 [2024-06-10 08:02:14.365910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62516 ] 00:06:52.771 [2024-06-10 08:02:14.507233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.771 [2024-06-10 08:02:14.616984] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.029 [2024-06-10 08:02:14.676440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.029 [2024-06-10 08:02:14.782567] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:53.029 [2024-06-10 08:02:14.782648] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.287 [2024-06-10 08:02:14.907577] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:53.287 ************************************ 00:06:53.287 END TEST dd_bs_lt_native_bs 00:06:53.287 ************************************ 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # es=234 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # es=106 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # case "$es" in 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@669 -- # es=1 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:53.287 00:06:53.287 real 0m0.693s 00:06:53.287 user 0m0.473s 00:06:53.287 sys 0m0.175s 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.287 ************************************ 00:06:53.287 START TEST dd_rw 00:06:53.287 ************************************ 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # basic_rw 4096 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:53.287 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- 
# bss+=($((native_bs << bs))) 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:53.288 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.853 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:53.853 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:53.853 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:53.853 08:02:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.111 [2024-06-10 08:02:15.745899] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:06:54.111 [2024-06-10 08:02:15.746676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62547 ] 00:06:54.111 { 00:06:54.111 "subsystems": [ 00:06:54.111 { 00:06:54.111 "subsystem": "bdev", 00:06:54.111 "config": [ 00:06:54.111 { 00:06:54.111 "params": { 00:06:54.111 "trtype": "pcie", 00:06:54.111 "traddr": "0000:00:10.0", 00:06:54.111 "name": "Nvme0" 00:06:54.111 }, 00:06:54.111 "method": "bdev_nvme_attach_controller" 00:06:54.111 }, 00:06:54.111 { 00:06:54.111 "method": "bdev_wait_for_examine" 00:06:54.111 } 00:06:54.111 ] 00:06:54.111 } 00:06:54.111 ] 00:06:54.111 } 00:06:54.111 [2024-06-10 08:02:15.885854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.370 [2024-06-10 08:02:15.994160] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.370 [2024-06-10 08:02:16.051602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.640  Copying: 60/60 [kB] (average 29 MBps) 00:06:54.640 00:06:54.640 08:02:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:54.640 08:02:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:54.640 08:02:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:54.640 08:02:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.640 [2024-06-10 08:02:16.435508] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
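Two earlier results feed the dd_rw phase that is now running. First, get_native_nvme_bs resolved the spdk_nvme_identify dump above with the two regexes shown, "Current LBA Format: LBA Format #04" and then "LBA Format #04: Data Size: 4096", into a native block size of 4096 bytes. Second, dd_bs_lt_native_bs used that value for a negative check: spdk_dd invoked with --bs=2048 has to fail because 2048 is below the native block size, and the trace shows the expected error followed by a non-zero exit that the NOT wrapper converts into a pass. dd_rw itself sweeps block sizes of native_bs << 0..2 (4096, 8192, 16384) at queue depths 1 and 64; the first combination is already in flight above, and every one follows the same write, read-back, compare, clear cycle. A condensed sketch, assuming gen_bytes, gen_conf and clear_nvme behave the way the trace suggests, and with the count formula chosen only to reproduce the traced 15/7/3 values:

    # Sketch of the cycle repeated by the surrounding blocks; not the literal basic_rw.sh.
    native_bs=4096
    qds=(1 64)
    bss=("$((native_bs << 0))" "$((native_bs << 1))" "$((native_bs << 2))")    # 4096 8192 16384
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=$((61440 / bs)); size=$((count * bs))                        # 15/7/3 blocks
            gen_bytes "$size" > dd.dump0                                       # fresh random payload
            spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
            spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
            diff -q dd.dump0 dd.dump1                                          # data must survive the round trip
            clear_nvme Nvme0n1 '' "$size"                                      # zero the region between passes
        done
    done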
00:06:54.640 [2024-06-10 08:02:16.435598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62566 ] 00:06:54.640 { 00:06:54.640 "subsystems": [ 00:06:54.640 { 00:06:54.640 "subsystem": "bdev", 00:06:54.640 "config": [ 00:06:54.640 { 00:06:54.640 "params": { 00:06:54.640 "trtype": "pcie", 00:06:54.640 "traddr": "0000:00:10.0", 00:06:54.640 "name": "Nvme0" 00:06:54.640 }, 00:06:54.640 "method": "bdev_nvme_attach_controller" 00:06:54.640 }, 00:06:54.640 { 00:06:54.640 "method": "bdev_wait_for_examine" 00:06:54.640 } 00:06:54.640 ] 00:06:54.640 } 00:06:54.640 ] 00:06:54.640 } 00:06:54.913 [2024-06-10 08:02:16.574485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.913 [2024-06-10 08:02:16.682381] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.913 [2024-06-10 08:02:16.737983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.430  Copying: 60/60 [kB] (average 19 MBps) 00:06:55.430 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:55.430 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.430 [2024-06-10 08:02:17.126691] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
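The spdk_dd instance starting here is not part of the data path: it is the clear_nvme helper, which overwrites the region that was just verified so the next pass cannot accidentally compare against stale data. The trace shows the whole mechanism, a single 1 MiB write of zeroes taken from /dev/zero. Roughly, with the argument names copied from the trace (count=1 is specific to these small sizes; the real helper presumably scales it with size):

    # Sketch of clear_nvme as traced here: blanket the used region with zeroes.
    clear_nvme() {
        local bdev=$1 nvme_ref=$2 size=$3
        local bs=1048576 count=1                  # 61440 bytes fits inside one 1 MiB block
        spdk_dd --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf)
    }
    clear_nvme Nvme0n1 '' 61440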
00:06:55.430 [2024-06-10 08:02:17.126830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62576 ] 00:06:55.430 { 00:06:55.430 "subsystems": [ 00:06:55.430 { 00:06:55.430 "subsystem": "bdev", 00:06:55.430 "config": [ 00:06:55.430 { 00:06:55.430 "params": { 00:06:55.430 "trtype": "pcie", 00:06:55.430 "traddr": "0000:00:10.0", 00:06:55.430 "name": "Nvme0" 00:06:55.430 }, 00:06:55.430 "method": "bdev_nvme_attach_controller" 00:06:55.430 }, 00:06:55.430 { 00:06:55.430 "method": "bdev_wait_for_examine" 00:06:55.430 } 00:06:55.430 ] 00:06:55.430 } 00:06:55.430 ] 00:06:55.430 } 00:06:55.430 [2024-06-10 08:02:17.266302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.689 [2024-06-10 08:02:17.369678] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.689 [2024-06-10 08:02:17.426802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.947  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:55.947 00:06:55.947 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:55.947 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:55.947 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:55.947 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:55.947 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:55.947 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:55.947 08:02:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.516 08:02:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:56.516 08:02:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:56.516 08:02:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:56.516 08:02:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.775 [2024-06-10 08:02:18.386736] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
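The write kicking off here is the same 15-block, 4096-byte transfer again, with --qd=64 as the only change to the spdk_dd invocation. Allowing 64 requests in flight instead of one roughly doubles the reported throughput on this QEMU-emulated controller: the copy summaries go from 29 MBps (write) and 19 MBps (read) at queue depth 1 to about 58 MBps in both directions in the blocks that follow, which is the effect the qd sweep is meant to exercise.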
00:06:56.775 [2024-06-10 08:02:18.386897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62606 ] 00:06:56.775 { 00:06:56.775 "subsystems": [ 00:06:56.775 { 00:06:56.775 "subsystem": "bdev", 00:06:56.775 "config": [ 00:06:56.775 { 00:06:56.775 "params": { 00:06:56.775 "trtype": "pcie", 00:06:56.775 "traddr": "0000:00:10.0", 00:06:56.775 "name": "Nvme0" 00:06:56.775 }, 00:06:56.775 "method": "bdev_nvme_attach_controller" 00:06:56.775 }, 00:06:56.775 { 00:06:56.775 "method": "bdev_wait_for_examine" 00:06:56.775 } 00:06:56.775 ] 00:06:56.775 } 00:06:56.775 ] 00:06:56.775 } 00:06:56.775 [2024-06-10 08:02:18.522237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.775 [2024-06-10 08:02:18.633079] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.034 [2024-06-10 08:02:18.688635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.293  Copying: 60/60 [kB] (average 58 MBps) 00:06:57.293 00:06:57.293 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:57.293 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:57.293 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:57.293 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.293 [2024-06-10 08:02:19.057240] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:06:57.293 [2024-06-10 08:02:19.057366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62614 ] 00:06:57.293 { 00:06:57.293 "subsystems": [ 00:06:57.293 { 00:06:57.293 "subsystem": "bdev", 00:06:57.293 "config": [ 00:06:57.293 { 00:06:57.293 "params": { 00:06:57.293 "trtype": "pcie", 00:06:57.293 "traddr": "0000:00:10.0", 00:06:57.293 "name": "Nvme0" 00:06:57.293 }, 00:06:57.293 "method": "bdev_nvme_attach_controller" 00:06:57.293 }, 00:06:57.293 { 00:06:57.293 "method": "bdev_wait_for_examine" 00:06:57.293 } 00:06:57.293 ] 00:06:57.293 } 00:06:57.293 ] 00:06:57.293 } 00:06:57.552 [2024-06-10 08:02:19.196689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.552 [2024-06-10 08:02:19.307101] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.552 [2024-06-10 08:02:19.362736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.070  Copying: 60/60 [kB] (average 58 MBps) 00:06:58.070 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:58.070 08:02:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.070 [2024-06-10 08:02:19.740439] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:06:58.070 [2024-06-10 08:02:19.740556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62635 ] 00:06:58.070 { 00:06:58.070 "subsystems": [ 00:06:58.070 { 00:06:58.070 "subsystem": "bdev", 00:06:58.070 "config": [ 00:06:58.070 { 00:06:58.070 "params": { 00:06:58.070 "trtype": "pcie", 00:06:58.070 "traddr": "0000:00:10.0", 00:06:58.070 "name": "Nvme0" 00:06:58.070 }, 00:06:58.070 "method": "bdev_nvme_attach_controller" 00:06:58.070 }, 00:06:58.070 { 00:06:58.070 "method": "bdev_wait_for_examine" 00:06:58.070 } 00:06:58.070 ] 00:06:58.070 } 00:06:58.070 ] 00:06:58.070 } 00:06:58.070 [2024-06-10 08:02:19.878265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.329 [2024-06-10 08:02:19.985929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.329 [2024-06-10 08:02:20.040347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.587  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:58.587 00:06:58.587 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:58.587 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:58.587 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:58.587 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:58.587 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:58.587 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:58.587 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:58.587 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.154 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:59.154 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:59.154 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.154 08:02:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.154 [2024-06-10 08:02:20.956912] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
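From this block onward the suite is at the second block size of the matrix, 8192 bytes. The sizes printed by the trace follow directly from count × bs: 15 × 4096 = 61440, 7 × 8192 = 57344 and 3 × 16384 = 49152, so every pass moves roughly the same 48-60 kB of data no matter which block size is being exercised.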
00:06:59.154 [2024-06-10 08:02:20.957019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62654 ] 00:06:59.154 { 00:06:59.154 "subsystems": [ 00:06:59.154 { 00:06:59.154 "subsystem": "bdev", 00:06:59.154 "config": [ 00:06:59.154 { 00:06:59.154 "params": { 00:06:59.154 "trtype": "pcie", 00:06:59.154 "traddr": "0000:00:10.0", 00:06:59.154 "name": "Nvme0" 00:06:59.154 }, 00:06:59.154 "method": "bdev_nvme_attach_controller" 00:06:59.154 }, 00:06:59.154 { 00:06:59.154 "method": "bdev_wait_for_examine" 00:06:59.154 } 00:06:59.154 ] 00:06:59.154 } 00:06:59.154 ] 00:06:59.154 } 00:06:59.412 [2024-06-10 08:02:21.097001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.412 [2024-06-10 08:02:21.196741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.412 [2024-06-10 08:02:21.251493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.929  Copying: 56/56 [kB] (average 54 MBps) 00:06:59.929 00:06:59.929 08:02:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:59.929 08:02:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:59.929 08:02:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.929 08:02:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.929 [2024-06-10 08:02:21.625931] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:06:59.930 [2024-06-10 08:02:21.626053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62673 ] 00:06:59.930 { 00:06:59.930 "subsystems": [ 00:06:59.930 { 00:06:59.930 "subsystem": "bdev", 00:06:59.930 "config": [ 00:06:59.930 { 00:06:59.930 "params": { 00:06:59.930 "trtype": "pcie", 00:06:59.930 "traddr": "0000:00:10.0", 00:06:59.930 "name": "Nvme0" 00:06:59.930 }, 00:06:59.930 "method": "bdev_nvme_attach_controller" 00:06:59.930 }, 00:06:59.930 { 00:06:59.930 "method": "bdev_wait_for_examine" 00:06:59.930 } 00:06:59.930 ] 00:06:59.930 } 00:06:59.930 ] 00:06:59.930 } 00:06:59.930 [2024-06-10 08:02:21.765110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.188 [2024-06-10 08:02:21.873207] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.188 [2024-06-10 08:02:21.927477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.447  Copying: 56/56 [kB] (average 27 MBps) 00:07:00.447 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:00.447 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.447 [2024-06-10 08:02:22.302298] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:00.447 [2024-06-10 08:02:22.302398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62683 ] 00:07:00.447 { 00:07:00.447 "subsystems": [ 00:07:00.447 { 00:07:00.447 "subsystem": "bdev", 00:07:00.447 "config": [ 00:07:00.447 { 00:07:00.447 "params": { 00:07:00.447 "trtype": "pcie", 00:07:00.447 "traddr": "0000:00:10.0", 00:07:00.447 "name": "Nvme0" 00:07:00.447 }, 00:07:00.447 "method": "bdev_nvme_attach_controller" 00:07:00.447 }, 00:07:00.447 { 00:07:00.447 "method": "bdev_wait_for_examine" 00:07:00.447 } 00:07:00.447 ] 00:07:00.447 } 00:07:00.447 ] 00:07:00.447 } 00:07:00.706 [2024-06-10 08:02:22.442730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.706 [2024-06-10 08:02:22.547562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.965 [2024-06-10 08:02:22.607258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.224  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:01.224 00:07:01.224 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:01.224 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:01.224 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:01.224 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:01.224 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:01.224 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:01.224 08:02:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.793 08:02:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:01.793 08:02:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:01.793 08:02:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.793 08:02:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.793 [2024-06-10 08:02:23.526316] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:01.793 [2024-06-10 08:02:23.526424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62708 ] 00:07:01.793 { 00:07:01.793 "subsystems": [ 00:07:01.793 { 00:07:01.793 "subsystem": "bdev", 00:07:01.793 "config": [ 00:07:01.793 { 00:07:01.793 "params": { 00:07:01.793 "trtype": "pcie", 00:07:01.793 "traddr": "0000:00:10.0", 00:07:01.793 "name": "Nvme0" 00:07:01.793 }, 00:07:01.793 "method": "bdev_nvme_attach_controller" 00:07:01.793 }, 00:07:01.793 { 00:07:01.793 "method": "bdev_wait_for_examine" 00:07:01.793 } 00:07:01.793 ] 00:07:01.793 } 00:07:01.793 ] 00:07:01.793 } 00:07:02.052 [2024-06-10 08:02:23.667823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.052 [2024-06-10 08:02:23.777385] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.052 [2024-06-10 08:02:23.836030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.311  Copying: 56/56 [kB] (average 54 MBps) 00:07:02.311 00:07:02.311 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:02.311 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:02.311 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.311 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.569 [2024-06-10 08:02:24.213833] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:02.569 [2024-06-10 08:02:24.213945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62721 ] 00:07:02.569 { 00:07:02.569 "subsystems": [ 00:07:02.569 { 00:07:02.569 "subsystem": "bdev", 00:07:02.569 "config": [ 00:07:02.569 { 00:07:02.569 "params": { 00:07:02.569 "trtype": "pcie", 00:07:02.569 "traddr": "0000:00:10.0", 00:07:02.569 "name": "Nvme0" 00:07:02.569 }, 00:07:02.569 "method": "bdev_nvme_attach_controller" 00:07:02.569 }, 00:07:02.569 { 00:07:02.569 "method": "bdev_wait_for_examine" 00:07:02.569 } 00:07:02.569 ] 00:07:02.569 } 00:07:02.569 ] 00:07:02.569 } 00:07:02.569 [2024-06-10 08:02:24.357178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.828 [2024-06-10 08:02:24.486055] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.828 [2024-06-10 08:02:24.543038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.087  Copying: 56/56 [kB] (average 54 MBps) 00:07:03.087 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:03.087 08:02:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.087 [2024-06-10 08:02:24.923166] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:03.087 [2024-06-10 08:02:24.923274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62742 ] 00:07:03.087 { 00:07:03.087 "subsystems": [ 00:07:03.087 { 00:07:03.087 "subsystem": "bdev", 00:07:03.087 "config": [ 00:07:03.087 { 00:07:03.087 "params": { 00:07:03.087 "trtype": "pcie", 00:07:03.087 "traddr": "0000:00:10.0", 00:07:03.087 "name": "Nvme0" 00:07:03.087 }, 00:07:03.087 "method": "bdev_nvme_attach_controller" 00:07:03.087 }, 00:07:03.087 { 00:07:03.087 "method": "bdev_wait_for_examine" 00:07:03.087 } 00:07:03.087 ] 00:07:03.087 } 00:07:03.087 ] 00:07:03.087 } 00:07:03.346 [2024-06-10 08:02:25.064109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.346 [2024-06-10 08:02:25.182683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.605 [2024-06-10 08:02:25.241980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.864  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:03.864 00:07:03.864 08:02:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:03.864 08:02:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:03.864 08:02:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:03.864 08:02:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:03.864 08:02:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:03.864 08:02:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:03.864 08:02:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:03.864 08:02:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.432 08:02:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:04.432 08:02:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:04.432 08:02:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.432 08:02:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.432 [2024-06-10 08:02:26.076258] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:04.432 [2024-06-10 08:02:26.076372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62761 ] 00:07:04.432 { 00:07:04.432 "subsystems": [ 00:07:04.432 { 00:07:04.432 "subsystem": "bdev", 00:07:04.432 "config": [ 00:07:04.432 { 00:07:04.432 "params": { 00:07:04.432 "trtype": "pcie", 00:07:04.432 "traddr": "0000:00:10.0", 00:07:04.432 "name": "Nvme0" 00:07:04.432 }, 00:07:04.432 "method": "bdev_nvme_attach_controller" 00:07:04.432 }, 00:07:04.432 { 00:07:04.432 "method": "bdev_wait_for_examine" 00:07:04.432 } 00:07:04.432 ] 00:07:04.432 } 00:07:04.432 ] 00:07:04.432 } 00:07:04.432 [2024-06-10 08:02:26.214092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.691 [2024-06-10 08:02:26.307884] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.691 [2024-06-10 08:02:26.361645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.950  Copying: 48/48 [kB] (average 46 MBps) 00:07:04.950 00:07:04.950 08:02:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:04.950 08:02:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:04.950 08:02:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.950 08:02:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.950 [2024-06-10 08:02:26.730186] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:04.950 [2024-06-10 08:02:26.730319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62776 ] 00:07:04.950 { 00:07:04.950 "subsystems": [ 00:07:04.950 { 00:07:04.950 "subsystem": "bdev", 00:07:04.950 "config": [ 00:07:04.950 { 00:07:04.950 "params": { 00:07:04.950 "trtype": "pcie", 00:07:04.950 "traddr": "0000:00:10.0", 00:07:04.950 "name": "Nvme0" 00:07:04.950 }, 00:07:04.950 "method": "bdev_nvme_attach_controller" 00:07:04.950 }, 00:07:04.950 { 00:07:04.950 "method": "bdev_wait_for_examine" 00:07:04.950 } 00:07:04.950 ] 00:07:04.950 } 00:07:04.950 ] 00:07:04.950 } 00:07:05.209 [2024-06-10 08:02:26.873496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.209 [2024-06-10 08:02:26.982786] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.209 [2024-06-10 08:02:27.036313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.725  Copying: 48/48 [kB] (average 46 MBps) 00:07:05.725 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:05.725 08:02:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.725 [2024-06-10 08:02:27.414911] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:05.725 [2024-06-10 08:02:27.415001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62790 ] 00:07:05.725 { 00:07:05.725 "subsystems": [ 00:07:05.725 { 00:07:05.725 "subsystem": "bdev", 00:07:05.725 "config": [ 00:07:05.725 { 00:07:05.725 "params": { 00:07:05.725 "trtype": "pcie", 00:07:05.725 "traddr": "0000:00:10.0", 00:07:05.725 "name": "Nvme0" 00:07:05.725 }, 00:07:05.725 "method": "bdev_nvme_attach_controller" 00:07:05.725 }, 00:07:05.725 { 00:07:05.725 "method": "bdev_wait_for_examine" 00:07:05.725 } 00:07:05.725 ] 00:07:05.725 } 00:07:05.725 ] 00:07:05.725 } 00:07:05.725 [2024-06-10 08:02:27.554193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.984 [2024-06-10 08:02:27.661808] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.984 [2024-06-10 08:02:27.716985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.242  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:06.242 00:07:06.242 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:06.242 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:06.242 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:06.242 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:06.242 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:06.242 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:06.242 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.809 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:06.809 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:06.809 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.809 08:02:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.809 [2024-06-10 08:02:28.536014] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:06.809 [2024-06-10 08:02:28.536139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62817 ] 00:07:06.809 { 00:07:06.809 "subsystems": [ 00:07:06.809 { 00:07:06.809 "subsystem": "bdev", 00:07:06.809 "config": [ 00:07:06.809 { 00:07:06.809 "params": { 00:07:06.809 "trtype": "pcie", 00:07:06.809 "traddr": "0000:00:10.0", 00:07:06.809 "name": "Nvme0" 00:07:06.809 }, 00:07:06.809 "method": "bdev_nvme_attach_controller" 00:07:06.809 }, 00:07:06.809 { 00:07:06.809 "method": "bdev_wait_for_examine" 00:07:06.809 } 00:07:06.809 ] 00:07:06.809 } 00:07:06.809 ] 00:07:06.809 } 00:07:06.809 [2024-06-10 08:02:28.673719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.067 [2024-06-10 08:02:28.778262] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.067 [2024-06-10 08:02:28.833036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.325  Copying: 48/48 [kB] (average 46 MBps) 00:07:07.325 00:07:07.325 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:07.325 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:07.325 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:07.325 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.584 [2024-06-10 08:02:29.201544] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:07.584 [2024-06-10 08:02:29.202198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62830 ] 00:07:07.584 { 00:07:07.584 "subsystems": [ 00:07:07.584 { 00:07:07.584 "subsystem": "bdev", 00:07:07.584 "config": [ 00:07:07.584 { 00:07:07.584 "params": { 00:07:07.584 "trtype": "pcie", 00:07:07.584 "traddr": "0000:00:10.0", 00:07:07.584 "name": "Nvme0" 00:07:07.584 }, 00:07:07.584 "method": "bdev_nvme_attach_controller" 00:07:07.584 }, 00:07:07.584 { 00:07:07.584 "method": "bdev_wait_for_examine" 00:07:07.584 } 00:07:07.584 ] 00:07:07.584 } 00:07:07.584 ] 00:07:07.584 } 00:07:07.584 [2024-06-10 08:02:29.342212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.843 [2024-06-10 08:02:29.451395] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.843 [2024-06-10 08:02:29.509650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.101  Copying: 48/48 [kB] (average 46 MBps) 00:07:08.101 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.101 08:02:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.102 [2024-06-10 08:02:29.889071] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:08.102 [2024-06-10 08:02:29.889183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62846 ] 00:07:08.102 { 00:07:08.102 "subsystems": [ 00:07:08.102 { 00:07:08.102 "subsystem": "bdev", 00:07:08.102 "config": [ 00:07:08.102 { 00:07:08.102 "params": { 00:07:08.102 "trtype": "pcie", 00:07:08.102 "traddr": "0000:00:10.0", 00:07:08.102 "name": "Nvme0" 00:07:08.102 }, 00:07:08.102 "method": "bdev_nvme_attach_controller" 00:07:08.102 }, 00:07:08.102 { 00:07:08.102 "method": "bdev_wait_for_examine" 00:07:08.102 } 00:07:08.102 ] 00:07:08.102 } 00:07:08.102 ] 00:07:08.102 } 00:07:08.360 [2024-06-10 08:02:30.030346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.360 [2024-06-10 08:02:30.137224] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.360 [2024-06-10 08:02:30.192263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.877  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:08.877 00:07:08.877 00:07:08.877 real 0m15.462s 00:07:08.877 user 0m11.456s 00:07:08.877 sys 0m5.522s 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.877 ************************************ 00:07:08.877 END TEST dd_rw 00:07:08.877 ************************************ 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.877 ************************************ 00:07:08.877 START TEST dd_rw_offset 00:07:08.877 ************************************ 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # basic_offset 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=9ht7ybxbwtpqfg3j5x7iu9pwuctt8jgd5q8xxwa6ianlg5jkxbfbyyalwtuzqlilly2592v0n64f0yybq24b47zpb2csnfub7w8j0blv7e2f3qcvszacwkw9atxkr2xpas6ffp1q6al1ys3crd6r3iahekmt00n0o78i0aygtttd845c0lc789l9h15ijwsnqlc9w0c24syrl7u7bxaj189gt9tqc6yiv73hqw2u07vbj35o85r29gnbpkr6z5y1pir7qbh1zt3r9nz5zhge3mcbkuubsktnrdgz1tt09epgzsymbptgeicwmkofh402fzdv9b7y5j3biis0q67a27n0ye5d9ywuz9d0dn1sh412szsu0ive86tpwad4fcmp0c7pv634yv3bbgdk6w94udx8xe9tb5dpgh8mkm2n2n5f2o7bkfitk0pev04tm95bb575bgdhk2m3b4i6aik7ygieuoajeix4h94drmakufft10tlxwfpcjb56jji4260uirqqdm261mzotgbllt1l31hv3eo0lwgaade6q5sn0zw5gd4ckumdstesypz2ngoesojds5q3pcr8cd7d63f9sv5hy34cm2dnu25sx9307o3pe5vgaxf3jkn2zolsdsmtdsolte2kfc9i62tifkhpp43ib03jg4nvpuvb1im3hyp0sccp2khsgdfvzpitv0m5o59m7fun8mtfre7o9skmvrea25mjaxm0aflnmnqidb5yfxsoiodxnlzncg00y2sg6rvzdu4zcm470rbdl924p5ttc26bq457mtbwleluy4mvbxzg2alu1jncozn6eahms962rbce9osl7jcn2jtoq25inv4vmqe782t467gtkw33t6eahwblyx1o175lszo69kadly8jstc0mrep2v2k5dephgwzzp59g27u3j5iwzyizk8qx1gkbt8sxzjga6rhmsqlm2gyokcumkwrtyb9hrgxqv3x2zktvkbxoli5j5flhc6t3kq30n2u7jx2e60fkb5lbrkqd1jtdmwcm5k0l852ayjvu24vunjhruvr2kavr0j8t7a3c12aht99vw7euo2plrtefadyaobwsx08r2f947lwc39348r9bkdhx10drg3lzyd70c50c08xqqfpxork0mfm7sznqlnover8wo11qlk128rrgg8nz9o96f7sz3z5i3udw38agg5mponkb0sfm5kzxtjepridu9qn7ut9ofvvydxyhnluvmnwuwhn04e12nea0smm3bcocrsv3nafim73laks1n7r1x25ya1c3a1qazn4lbl1pkpuf5p0mwa5img7drt37533utmeryvni74h41mm21zro8km1eu14k81nst3dc9ehb0n64955ttvmvkdw8en0tlns2bnju0fhatqipsz20vwdkyv27j658rl29pymozts55qi7y9w8stev4tl93b279j2z809b50mfr2wy4ih2d5h9bmw4wlt933d4azqig77gb5u4bwdj6allmqe95wysafm9edzrsv471ytsltljwyy2x9p3dy4l81rc7vyljcz22xy8cn75fbvzn36yywa2pjgmylsvyisclaqqodlg1tjwqtet7z2scgld5hneun03samv8df3h5596doywmd7iymbv0a7tihvdba9i8uxmqi6c714wlj1ouyv00iy7g8wj3oq5tvgu02wrdmn035xji0aywr1y54d98p2hq6q1zpyg5pquob35ueczbqs4herrmfpbaycwyw5qzkehe8ou8gnibv1eqj0odbv51e6jco9i915h5wob1tykjurjzob761y2f8ok802d1hz54lrkfdgajcave3611k1z36w1s4wok4qpdo9asve3sh06g8n7ggda61afxfnr317olxmvgbmtaiq5currlrpaf9qaksu6r4npdxopit7es4e94ddnp8hyq1ay93i4qgo8bfgdy6cck2lg1mkvfwagssm9g9x0ayj2is6d0l0pbpuyyt1cge1r46j99f9xr9ednd67fmo4n9lkisfjckvowk92rkbu2b1kompqkzry65fceldj3dkobiltfhfpatz46kfcagcj2433dmag29te2xrvo2o16yn4i6k1m7o9o9048j5t6rosoqn9ndr6cogo41myn02aph1fg845sfv9n30q4ixonzn0emk4oj90s4pixeljwpod9011sxyjtssutymu3rr6nsaingkpinmd6r969fck7ikarhrlqfoieh1z9p3ptislfo21rgztj87zr1jq3o4kz46h18gj0m0ubtptkeb6h6jkygnncopu62dt96abuye6lgwt2ildd54glasl9prznnrnrm1ta5u1e3opdwrfsak0enxx457kk0h5t7p5ein7k83ee2a9juyj2ax73qt6628hnbk9s4t0p3lt6l2x36q9l5pz2lyc9rk886djqye96a90c4pzc5epezj1em9gobkc34px14j9gui7w7iospef47exdg2ufa43poxgpf3ls2jnbxq8fqbzcjug7g2tu0kaqu8x5y0ylztp6pyzwvyjxgncuxz8l7e1ygkpiq0sa0qtw83a4g5on32p8un5u680y5zod7arf6iqw4io2wqs9dxt4infsgd9l66662hrz0wc2093hm18336jvbnksictw8lvbiwxeakb5jjiw1qxrq4dmsjmt5ombiuzt6m60du5pavfrh16vwlmuor7z6pbu3x3n01l1codh3ha236udczajaogeunrvn6alsulni7fyg3ql58s4qnt66ss1s1tmk05xpl08btdjvwnlfa8v3b2xcanguxir004eht7of834ochc0mij3ayxasmid5csb3psed4k9uqjtgv8qr3qswzv7cwrtd9yszl5xxdqly2qds6qgbx84tvxm6mmwowg28czl62wov8h0d2do7g673ers2wpr7pyacx82tab8gnsgeqtero0tbh2dxx7so2u47oyu5j2lxrpks1lqtf3ae14ajdn8l01mn9p8n2qqzvdvw7czfrs97dvn0pdzmk5n1et79qcnkyjebssic8g38rni3y4c71zvim7lxj6g2p8yqmnklhoowsrycm34o2ax3ebdi9y0ejkp10xp9rxnfh1megesdrftp6oovcz79q7tl7p4o7cd2hjd9swehtkzezt4yi9o5oz3atmpmx11yp87xbuea8emugdpn2yk1za2k47ow42pfgeusl3nabkjit4wjujll8zonv24duldleiyhw7o01jcnxezzh7in4hblzs49893uwv6rzhbv75shrg26amylsb9z7w6o66hsvwi79r0ym8lqxnc4mhw40nln9xbusf7xfoiyt43d9tjpe8ahujvxwbwajqda8kwxmowmjcdj0jcbnvfyfb44p0prs97kuw1eayt1m49z47zgqaeogj68bxnrjw0tmv17ribatiackwzdgbby3ak6nc0bbyvqjjb4zlzmh1ur3jyvra6v5nj0r2htoh8qc4fdzb4
qnazxf9u8k59e9y0nvb8rzir06l7b67lty6ye1ujs948ei5bd2726636mehtbbyrziczwd52sk7p1njmso2q4k2z2apk51qy74asun6bqpv68k8lbo9zc6ctxmpjds5cmdmfpz3rvydczqi5zyf2k11vyjihpqzetkwkay5644rn46367uhhgje53zht6rfx3mdsrphwo930dotvgq9gqtgbiuzsa3q01qp64r8kpp2rnzd2d1sffpg0ps0hi4xu99scrzscmyiwh24olpl0jr2y1co9vkkh3yjn9zd7q7pjqrr1moohegub5l585v8cpyyx3cbk2mhmbxli4dwozfs8x3yxfoiy6s61droijm894wj61hwdkmc3vqv5u0k9hdvz9lgyyas0f2c2y1i6dasxr8lpdg3aq3m9babn5ddegs9uugnu4j6kd09auwpxabsjx68ffjtidyawqw3wauavktveebjjvaga9xozcwl1w7tmc55498ydmmrtuav8bflbptyq5lfjora2uqdt7q906a6nxwhqt2 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:08.877 08:02:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:08.877 [2024-06-10 08:02:30.660836] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:08.877 [2024-06-10 08:02:30.661003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62882 ] 00:07:08.877 { 00:07:08.878 "subsystems": [ 00:07:08.878 { 00:07:08.878 "subsystem": "bdev", 00:07:08.878 "config": [ 00:07:08.878 { 00:07:08.878 "params": { 00:07:08.878 "trtype": "pcie", 00:07:08.878 "traddr": "0000:00:10.0", 00:07:08.878 "name": "Nvme0" 00:07:08.878 }, 00:07:08.878 "method": "bdev_nvme_attach_controller" 00:07:08.878 }, 00:07:08.878 { 00:07:08.878 "method": "bdev_wait_for_examine" 00:07:08.878 } 00:07:08.878 ] 00:07:08.878 } 00:07:08.878 ] 00:07:08.878 } 00:07:09.136 [2024-06-10 08:02:30.803640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.136 [2024-06-10 08:02:30.906803] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.136 [2024-06-10 08:02:30.964623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.653  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:09.653 00:07:09.653 08:02:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:09.653 08:02:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:09.653 08:02:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:09.653 08:02:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:09.653 [2024-06-10 08:02:31.333341] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
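The dd_rw_offset case running here checks that block-offset copies land where they should: gen_bytes 4096 produces the long random payload above, spdk_dd writes it one block into the bdev with --seek=1, reads the same block back with --skip=1 --count=1, and bash compares the round-tripped bytes (the backslash-heavy blob further down is just xtrace's rendering of that comparison pattern). A condensed sketch under the same flags (the gen_bytes stand-in, the shortened file names, and the inline conf are assumptions; offsets are in blocks, not bytes):

# Illustrative sketch of the offset round-trip exercised above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)      # stand-in for gen_bytes 4096
printf '%s' "$data" > dd.dump0
"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$conf")               # write at block 1
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(printf '%s' "$conf")     # read block 1 back
read -rn4096 data_check < dd.dump1
[[ $data == "$data_check" ]] && echo 'offset round-trip matches'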
00:07:09.653 [2024-06-10 08:02:31.333439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62895 ] 00:07:09.653 { 00:07:09.653 "subsystems": [ 00:07:09.653 { 00:07:09.653 "subsystem": "bdev", 00:07:09.653 "config": [ 00:07:09.653 { 00:07:09.653 "params": { 00:07:09.653 "trtype": "pcie", 00:07:09.653 "traddr": "0000:00:10.0", 00:07:09.653 "name": "Nvme0" 00:07:09.653 }, 00:07:09.653 "method": "bdev_nvme_attach_controller" 00:07:09.653 }, 00:07:09.653 { 00:07:09.653 "method": "bdev_wait_for_examine" 00:07:09.653 } 00:07:09.653 ] 00:07:09.653 } 00:07:09.653 ] 00:07:09.653 } 00:07:09.653 [2024-06-10 08:02:31.473360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.912 [2024-06-10 08:02:31.582241] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.912 [2024-06-10 08:02:31.636402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.171  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:10.171 00:07:10.171 08:02:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:10.171 08:02:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 9ht7ybxbwtpqfg3j5x7iu9pwuctt8jgd5q8xxwa6ianlg5jkxbfbyyalwtuzqlilly2592v0n64f0yybq24b47zpb2csnfub7w8j0blv7e2f3qcvszacwkw9atxkr2xpas6ffp1q6al1ys3crd6r3iahekmt00n0o78i0aygtttd845c0lc789l9h15ijwsnqlc9w0c24syrl7u7bxaj189gt9tqc6yiv73hqw2u07vbj35o85r29gnbpkr6z5y1pir7qbh1zt3r9nz5zhge3mcbkuubsktnrdgz1tt09epgzsymbptgeicwmkofh402fzdv9b7y5j3biis0q67a27n0ye5d9ywuz9d0dn1sh412szsu0ive86tpwad4fcmp0c7pv634yv3bbgdk6w94udx8xe9tb5dpgh8mkm2n2n5f2o7bkfitk0pev04tm95bb575bgdhk2m3b4i6aik7ygieuoajeix4h94drmakufft10tlxwfpcjb56jji4260uirqqdm261mzotgbllt1l31hv3eo0lwgaade6q5sn0zw5gd4ckumdstesypz2ngoesojds5q3pcr8cd7d63f9sv5hy34cm2dnu25sx9307o3pe5vgaxf3jkn2zolsdsmtdsolte2kfc9i62tifkhpp43ib03jg4nvpuvb1im3hyp0sccp2khsgdfvzpitv0m5o59m7fun8mtfre7o9skmvrea25mjaxm0aflnmnqidb5yfxsoiodxnlzncg00y2sg6rvzdu4zcm470rbdl924p5ttc26bq457mtbwleluy4mvbxzg2alu1jncozn6eahms962rbce9osl7jcn2jtoq25inv4vmqe782t467gtkw33t6eahwblyx1o175lszo69kadly8jstc0mrep2v2k5dephgwzzp59g27u3j5iwzyizk8qx1gkbt8sxzjga6rhmsqlm2gyokcumkwrtyb9hrgxqv3x2zktvkbxoli5j5flhc6t3kq30n2u7jx2e60fkb5lbrkqd1jtdmwcm5k0l852ayjvu24vunjhruvr2kavr0j8t7a3c12aht99vw7euo2plrtefadyaobwsx08r2f947lwc39348r9bkdhx10drg3lzyd70c50c08xqqfpxork0mfm7sznqlnover8wo11qlk128rrgg8nz9o96f7sz3z5i3udw38agg5mponkb0sfm5kzxtjepridu9qn7ut9ofvvydxyhnluvmnwuwhn04e12nea0smm3bcocrsv3nafim73laks1n7r1x25ya1c3a1qazn4lbl1pkpuf5p0mwa5img7drt37533utmeryvni74h41mm21zro8km1eu14k81nst3dc9ehb0n64955ttvmvkdw8en0tlns2bnju0fhatqipsz20vwdkyv27j658rl29pymozts55qi7y9w8stev4tl93b279j2z809b50mfr2wy4ih2d5h9bmw4wlt933d4azqig77gb5u4bwdj6allmqe95wysafm9edzrsv471ytsltljwyy2x9p3dy4l81rc7vyljcz22xy8cn75fbvzn36yywa2pjgmylsvyisclaqqodlg1tjwqtet7z2scgld5hneun03samv8df3h5596doywmd7iymbv0a7tihvdba9i8uxmqi6c714wlj1ouyv00iy7g8wj3oq5tvgu02wrdmn035xji0aywr1y54d98p2hq6q1zpyg5pquob35ueczbqs4herrmfpbaycwyw5qzkehe8ou8gnibv1eqj0odbv51e6jco9i915h5wob1tykjurjzob761y2f8ok802d1hz54lrkfdgajcave3611k1z36w1s4wok4qpdo9asve3sh06g8n7ggda61afxfnr317olxmvgbmtaiq5currlrpaf9qaksu6r4npdxopit7es4e94ddnp8hyq1ay93i4qgo8bfgdy6cck2lg1mkvfwagssm9g9x0ayj2is6d0l0pbpuyyt1cge1r46j99f9xr9ednd67fmo4n9lkisfjckvowk92rkbu2b1kompqkzry65fceldj3dkobiltfhfpatz46kfcagcj2433dmag29te2xrvo2o16yn4i6k1m7o9o9048j5t6rosoqn9ndr6cogo41myn02aph1fg
845sfv9n30q4ixonzn0emk4oj90s4pixeljwpod9011sxyjtssutymu3rr6nsaingkpinmd6r969fck7ikarhrlqfoieh1z9p3ptislfo21rgztj87zr1jq3o4kz46h18gj0m0ubtptkeb6h6jkygnncopu62dt96abuye6lgwt2ildd54glasl9prznnrnrm1ta5u1e3opdwrfsak0enxx457kk0h5t7p5ein7k83ee2a9juyj2ax73qt6628hnbk9s4t0p3lt6l2x36q9l5pz2lyc9rk886djqye96a90c4pzc5epezj1em9gobkc34px14j9gui7w7iospef47exdg2ufa43poxgpf3ls2jnbxq8fqbzcjug7g2tu0kaqu8x5y0ylztp6pyzwvyjxgncuxz8l7e1ygkpiq0sa0qtw83a4g5on32p8un5u680y5zod7arf6iqw4io2wqs9dxt4infsgd9l66662hrz0wc2093hm18336jvbnksictw8lvbiwxeakb5jjiw1qxrq4dmsjmt5ombiuzt6m60du5pavfrh16vwlmuor7z6pbu3x3n01l1codh3ha236udczajaogeunrvn6alsulni7fyg3ql58s4qnt66ss1s1tmk05xpl08btdjvwnlfa8v3b2xcanguxir004eht7of834ochc0mij3ayxasmid5csb3psed4k9uqjtgv8qr3qswzv7cwrtd9yszl5xxdqly2qds6qgbx84tvxm6mmwowg28czl62wov8h0d2do7g673ers2wpr7pyacx82tab8gnsgeqtero0tbh2dxx7so2u47oyu5j2lxrpks1lqtf3ae14ajdn8l01mn9p8n2qqzvdvw7czfrs97dvn0pdzmk5n1et79qcnkyjebssic8g38rni3y4c71zvim7lxj6g2p8yqmnklhoowsrycm34o2ax3ebdi9y0ejkp10xp9rxnfh1megesdrftp6oovcz79q7tl7p4o7cd2hjd9swehtkzezt4yi9o5oz3atmpmx11yp87xbuea8emugdpn2yk1za2k47ow42pfgeusl3nabkjit4wjujll8zonv24duldleiyhw7o01jcnxezzh7in4hblzs49893uwv6rzhbv75shrg26amylsb9z7w6o66hsvwi79r0ym8lqxnc4mhw40nln9xbusf7xfoiyt43d9tjpe8ahujvxwbwajqda8kwxmowmjcdj0jcbnvfyfb44p0prs97kuw1eayt1m49z47zgqaeogj68bxnrjw0tmv17ribatiackwzdgbby3ak6nc0bbyvqjjb4zlzmh1ur3jyvra6v5nj0r2htoh8qc4fdzb4qnazxf9u8k59e9y0nvb8rzir06l7b67lty6ye1ujs948ei5bd2726636mehtbbyrziczwd52sk7p1njmso2q4k2z2apk51qy74asun6bqpv68k8lbo9zc6ctxmpjds5cmdmfpz3rvydczqi5zyf2k11vyjihpqzetkwkay5644rn46367uhhgje53zht6rfx3mdsrphwo930dotvgq9gqtgbiuzsa3q01qp64r8kpp2rnzd2d1sffpg0ps0hi4xu99scrzscmyiwh24olpl0jr2y1co9vkkh3yjn9zd7q7pjqrr1moohegub5l585v8cpyyx3cbk2mhmbxli4dwozfs8x3yxfoiy6s61droijm894wj61hwdkmc3vqv5u0k9hdvz9lgyyas0f2c2y1i6dasxr8lpdg3aq3m9babn5ddegs9uugnu4j6kd09auwpxabsjx68ffjtidyawqw3wauavktveebjjvaga9xozcwl1w7tmc55498ydmmrtuav8bflbptyq5lfjora2uqdt7q906a6nxwhqt2 == 
\9\h\t\7\y\b\x\b\w\t\p\q\f\g\3\j\5\x\7\i\u\9\p\w\u\c\t\t\8\j\g\d\5\q\8\x\x\w\a\6\i\a\n\l\g\5\j\k\x\b\f\b\y\y\a\l\w\t\u\z\q\l\i\l\l\y\2\5\9\2\v\0\n\6\4\f\0\y\y\b\q\2\4\b\4\7\z\p\b\2\c\s\n\f\u\b\7\w\8\j\0\b\l\v\7\e\2\f\3\q\c\v\s\z\a\c\w\k\w\9\a\t\x\k\r\2\x\p\a\s\6\f\f\p\1\q\6\a\l\1\y\s\3\c\r\d\6\r\3\i\a\h\e\k\m\t\0\0\n\0\o\7\8\i\0\a\y\g\t\t\t\d\8\4\5\c\0\l\c\7\8\9\l\9\h\1\5\i\j\w\s\n\q\l\c\9\w\0\c\2\4\s\y\r\l\7\u\7\b\x\a\j\1\8\9\g\t\9\t\q\c\6\y\i\v\7\3\h\q\w\2\u\0\7\v\b\j\3\5\o\8\5\r\2\9\g\n\b\p\k\r\6\z\5\y\1\p\i\r\7\q\b\h\1\z\t\3\r\9\n\z\5\z\h\g\e\3\m\c\b\k\u\u\b\s\k\t\n\r\d\g\z\1\t\t\0\9\e\p\g\z\s\y\m\b\p\t\g\e\i\c\w\m\k\o\f\h\4\0\2\f\z\d\v\9\b\7\y\5\j\3\b\i\i\s\0\q\6\7\a\2\7\n\0\y\e\5\d\9\y\w\u\z\9\d\0\d\n\1\s\h\4\1\2\s\z\s\u\0\i\v\e\8\6\t\p\w\a\d\4\f\c\m\p\0\c\7\p\v\6\3\4\y\v\3\b\b\g\d\k\6\w\9\4\u\d\x\8\x\e\9\t\b\5\d\p\g\h\8\m\k\m\2\n\2\n\5\f\2\o\7\b\k\f\i\t\k\0\p\e\v\0\4\t\m\9\5\b\b\5\7\5\b\g\d\h\k\2\m\3\b\4\i\6\a\i\k\7\y\g\i\e\u\o\a\j\e\i\x\4\h\9\4\d\r\m\a\k\u\f\f\t\1\0\t\l\x\w\f\p\c\j\b\5\6\j\j\i\4\2\6\0\u\i\r\q\q\d\m\2\6\1\m\z\o\t\g\b\l\l\t\1\l\3\1\h\v\3\e\o\0\l\w\g\a\a\d\e\6\q\5\s\n\0\z\w\5\g\d\4\c\k\u\m\d\s\t\e\s\y\p\z\2\n\g\o\e\s\o\j\d\s\5\q\3\p\c\r\8\c\d\7\d\6\3\f\9\s\v\5\h\y\3\4\c\m\2\d\n\u\2\5\s\x\9\3\0\7\o\3\p\e\5\v\g\a\x\f\3\j\k\n\2\z\o\l\s\d\s\m\t\d\s\o\l\t\e\2\k\f\c\9\i\6\2\t\i\f\k\h\p\p\4\3\i\b\0\3\j\g\4\n\v\p\u\v\b\1\i\m\3\h\y\p\0\s\c\c\p\2\k\h\s\g\d\f\v\z\p\i\t\v\0\m\5\o\5\9\m\7\f\u\n\8\m\t\f\r\e\7\o\9\s\k\m\v\r\e\a\2\5\m\j\a\x\m\0\a\f\l\n\m\n\q\i\d\b\5\y\f\x\s\o\i\o\d\x\n\l\z\n\c\g\0\0\y\2\s\g\6\r\v\z\d\u\4\z\c\m\4\7\0\r\b\d\l\9\2\4\p\5\t\t\c\2\6\b\q\4\5\7\m\t\b\w\l\e\l\u\y\4\m\v\b\x\z\g\2\a\l\u\1\j\n\c\o\z\n\6\e\a\h\m\s\9\6\2\r\b\c\e\9\o\s\l\7\j\c\n\2\j\t\o\q\2\5\i\n\v\4\v\m\q\e\7\8\2\t\4\6\7\g\t\k\w\3\3\t\6\e\a\h\w\b\l\y\x\1\o\1\7\5\l\s\z\o\6\9\k\a\d\l\y\8\j\s\t\c\0\m\r\e\p\2\v\2\k\5\d\e\p\h\g\w\z\z\p\5\9\g\2\7\u\3\j\5\i\w\z\y\i\z\k\8\q\x\1\g\k\b\t\8\s\x\z\j\g\a\6\r\h\m\s\q\l\m\2\g\y\o\k\c\u\m\k\w\r\t\y\b\9\h\r\g\x\q\v\3\x\2\z\k\t\v\k\b\x\o\l\i\5\j\5\f\l\h\c\6\t\3\k\q\3\0\n\2\u\7\j\x\2\e\6\0\f\k\b\5\l\b\r\k\q\d\1\j\t\d\m\w\c\m\5\k\0\l\8\5\2\a\y\j\v\u\2\4\v\u\n\j\h\r\u\v\r\2\k\a\v\r\0\j\8\t\7\a\3\c\1\2\a\h\t\9\9\v\w\7\e\u\o\2\p\l\r\t\e\f\a\d\y\a\o\b\w\s\x\0\8\r\2\f\9\4\7\l\w\c\3\9\3\4\8\r\9\b\k\d\h\x\1\0\d\r\g\3\l\z\y\d\7\0\c\5\0\c\0\8\x\q\q\f\p\x\o\r\k\0\m\f\m\7\s\z\n\q\l\n\o\v\e\r\8\w\o\1\1\q\l\k\1\2\8\r\r\g\g\8\n\z\9\o\9\6\f\7\s\z\3\z\5\i\3\u\d\w\3\8\a\g\g\5\m\p\o\n\k\b\0\s\f\m\5\k\z\x\t\j\e\p\r\i\d\u\9\q\n\7\u\t\9\o\f\v\v\y\d\x\y\h\n\l\u\v\m\n\w\u\w\h\n\0\4\e\1\2\n\e\a\0\s\m\m\3\b\c\o\c\r\s\v\3\n\a\f\i\m\7\3\l\a\k\s\1\n\7\r\1\x\2\5\y\a\1\c\3\a\1\q\a\z\n\4\l\b\l\1\p\k\p\u\f\5\p\0\m\w\a\5\i\m\g\7\d\r\t\3\7\5\3\3\u\t\m\e\r\y\v\n\i\7\4\h\4\1\m\m\2\1\z\r\o\8\k\m\1\e\u\1\4\k\8\1\n\s\t\3\d\c\9\e\h\b\0\n\6\4\9\5\5\t\t\v\m\v\k\d\w\8\e\n\0\t\l\n\s\2\b\n\j\u\0\f\h\a\t\q\i\p\s\z\2\0\v\w\d\k\y\v\2\7\j\6\5\8\r\l\2\9\p\y\m\o\z\t\s\5\5\q\i\7\y\9\w\8\s\t\e\v\4\t\l\9\3\b\2\7\9\j\2\z\8\0\9\b\5\0\m\f\r\2\w\y\4\i\h\2\d\5\h\9\b\m\w\4\w\l\t\9\3\3\d\4\a\z\q\i\g\7\7\g\b\5\u\4\b\w\d\j\6\a\l\l\m\q\e\9\5\w\y\s\a\f\m\9\e\d\z\r\s\v\4\7\1\y\t\s\l\t\l\j\w\y\y\2\x\9\p\3\d\y\4\l\8\1\r\c\7\v\y\l\j\c\z\2\2\x\y\8\c\n\7\5\f\b\v\z\n\3\6\y\y\w\a\2\p\j\g\m\y\l\s\v\y\i\s\c\l\a\q\q\o\d\l\g\1\t\j\w\q\t\e\t\7\z\2\s\c\g\l\d\5\h\n\e\u\n\0\3\s\a\m\v\8\d\f\3\h\5\5\9\6\d\o\y\w\m\d\7\i\y\m\b\v\0\a\7\t\i\h\v\d\b\a\9\i\8\u\x\m\q\i\6\c\7\1\4\w\l\j\1\o\u\y\v\0\0\i\y\7\g\8\w\j\3\o\q\5\t\v\g\u\0\2\w\r\d\m\n\0\3\5\x\j\i\0\a\y\w\r\1\y\5\4\d\9\8\p\2\h\q\6\q\1\z\p\y\g\5\p\q\u\o\b\3\5\u\e\c\z\b\q\s\4\h\e\r\r\m\f\p\b\a\y\c\w\y\
w\5\q\z\k\e\h\e\8\o\u\8\g\n\i\b\v\1\e\q\j\0\o\d\b\v\5\1\e\6\j\c\o\9\i\9\1\5\h\5\w\o\b\1\t\y\k\j\u\r\j\z\o\b\7\6\1\y\2\f\8\o\k\8\0\2\d\1\h\z\5\4\l\r\k\f\d\g\a\j\c\a\v\e\3\6\1\1\k\1\z\3\6\w\1\s\4\w\o\k\4\q\p\d\o\9\a\s\v\e\3\s\h\0\6\g\8\n\7\g\g\d\a\6\1\a\f\x\f\n\r\3\1\7\o\l\x\m\v\g\b\m\t\a\i\q\5\c\u\r\r\l\r\p\a\f\9\q\a\k\s\u\6\r\4\n\p\d\x\o\p\i\t\7\e\s\4\e\9\4\d\d\n\p\8\h\y\q\1\a\y\9\3\i\4\q\g\o\8\b\f\g\d\y\6\c\c\k\2\l\g\1\m\k\v\f\w\a\g\s\s\m\9\g\9\x\0\a\y\j\2\i\s\6\d\0\l\0\p\b\p\u\y\y\t\1\c\g\e\1\r\4\6\j\9\9\f\9\x\r\9\e\d\n\d\6\7\f\m\o\4\n\9\l\k\i\s\f\j\c\k\v\o\w\k\9\2\r\k\b\u\2\b\1\k\o\m\p\q\k\z\r\y\6\5\f\c\e\l\d\j\3\d\k\o\b\i\l\t\f\h\f\p\a\t\z\4\6\k\f\c\a\g\c\j\2\4\3\3\d\m\a\g\2\9\t\e\2\x\r\v\o\2\o\1\6\y\n\4\i\6\k\1\m\7\o\9\o\9\0\4\8\j\5\t\6\r\o\s\o\q\n\9\n\d\r\6\c\o\g\o\4\1\m\y\n\0\2\a\p\h\1\f\g\8\4\5\s\f\v\9\n\3\0\q\4\i\x\o\n\z\n\0\e\m\k\4\o\j\9\0\s\4\p\i\x\e\l\j\w\p\o\d\9\0\1\1\s\x\y\j\t\s\s\u\t\y\m\u\3\r\r\6\n\s\a\i\n\g\k\p\i\n\m\d\6\r\9\6\9\f\c\k\7\i\k\a\r\h\r\l\q\f\o\i\e\h\1\z\9\p\3\p\t\i\s\l\f\o\2\1\r\g\z\t\j\8\7\z\r\1\j\q\3\o\4\k\z\4\6\h\1\8\g\j\0\m\0\u\b\t\p\t\k\e\b\6\h\6\j\k\y\g\n\n\c\o\p\u\6\2\d\t\9\6\a\b\u\y\e\6\l\g\w\t\2\i\l\d\d\5\4\g\l\a\s\l\9\p\r\z\n\n\r\n\r\m\1\t\a\5\u\1\e\3\o\p\d\w\r\f\s\a\k\0\e\n\x\x\4\5\7\k\k\0\h\5\t\7\p\5\e\i\n\7\k\8\3\e\e\2\a\9\j\u\y\j\2\a\x\7\3\q\t\6\6\2\8\h\n\b\k\9\s\4\t\0\p\3\l\t\6\l\2\x\3\6\q\9\l\5\p\z\2\l\y\c\9\r\k\8\8\6\d\j\q\y\e\9\6\a\9\0\c\4\p\z\c\5\e\p\e\z\j\1\e\m\9\g\o\b\k\c\3\4\p\x\1\4\j\9\g\u\i\7\w\7\i\o\s\p\e\f\4\7\e\x\d\g\2\u\f\a\4\3\p\o\x\g\p\f\3\l\s\2\j\n\b\x\q\8\f\q\b\z\c\j\u\g\7\g\2\t\u\0\k\a\q\u\8\x\5\y\0\y\l\z\t\p\6\p\y\z\w\v\y\j\x\g\n\c\u\x\z\8\l\7\e\1\y\g\k\p\i\q\0\s\a\0\q\t\w\8\3\a\4\g\5\o\n\3\2\p\8\u\n\5\u\6\8\0\y\5\z\o\d\7\a\r\f\6\i\q\w\4\i\o\2\w\q\s\9\d\x\t\4\i\n\f\s\g\d\9\l\6\6\6\6\2\h\r\z\0\w\c\2\0\9\3\h\m\1\8\3\3\6\j\v\b\n\k\s\i\c\t\w\8\l\v\b\i\w\x\e\a\k\b\5\j\j\i\w\1\q\x\r\q\4\d\m\s\j\m\t\5\o\m\b\i\u\z\t\6\m\6\0\d\u\5\p\a\v\f\r\h\1\6\v\w\l\m\u\o\r\7\z\6\p\b\u\3\x\3\n\0\1\l\1\c\o\d\h\3\h\a\2\3\6\u\d\c\z\a\j\a\o\g\e\u\n\r\v\n\6\a\l\s\u\l\n\i\7\f\y\g\3\q\l\5\8\s\4\q\n\t\6\6\s\s\1\s\1\t\m\k\0\5\x\p\l\0\8\b\t\d\j\v\w\n\l\f\a\8\v\3\b\2\x\c\a\n\g\u\x\i\r\0\0\4\e\h\t\7\o\f\8\3\4\o\c\h\c\0\m\i\j\3\a\y\x\a\s\m\i\d\5\c\s\b\3\p\s\e\d\4\k\9\u\q\j\t\g\v\8\q\r\3\q\s\w\z\v\7\c\w\r\t\d\9\y\s\z\l\5\x\x\d\q\l\y\2\q\d\s\6\q\g\b\x\8\4\t\v\x\m\6\m\m\w\o\w\g\2\8\c\z\l\6\2\w\o\v\8\h\0\d\2\d\o\7\g\6\7\3\e\r\s\2\w\p\r\7\p\y\a\c\x\8\2\t\a\b\8\g\n\s\g\e\q\t\e\r\o\0\t\b\h\2\d\x\x\7\s\o\2\u\4\7\o\y\u\5\j\2\l\x\r\p\k\s\1\l\q\t\f\3\a\e\1\4\a\j\d\n\8\l\0\1\m\n\9\p\8\n\2\q\q\z\v\d\v\w\7\c\z\f\r\s\9\7\d\v\n\0\p\d\z\m\k\5\n\1\e\t\7\9\q\c\n\k\y\j\e\b\s\s\i\c\8\g\3\8\r\n\i\3\y\4\c\7\1\z\v\i\m\7\l\x\j\6\g\2\p\8\y\q\m\n\k\l\h\o\o\w\s\r\y\c\m\3\4\o\2\a\x\3\e\b\d\i\9\y\0\e\j\k\p\1\0\x\p\9\r\x\n\f\h\1\m\e\g\e\s\d\r\f\t\p\6\o\o\v\c\z\7\9\q\7\t\l\7\p\4\o\7\c\d\2\h\j\d\9\s\w\e\h\t\k\z\e\z\t\4\y\i\9\o\5\o\z\3\a\t\m\p\m\x\1\1\y\p\8\7\x\b\u\e\a\8\e\m\u\g\d\p\n\2\y\k\1\z\a\2\k\4\7\o\w\4\2\p\f\g\e\u\s\l\3\n\a\b\k\j\i\t\4\w\j\u\j\l\l\8\z\o\n\v\2\4\d\u\l\d\l\e\i\y\h\w\7\o\0\1\j\c\n\x\e\z\z\h\7\i\n\4\h\b\l\z\s\4\9\8\9\3\u\w\v\6\r\z\h\b\v\7\5\s\h\r\g\2\6\a\m\y\l\s\b\9\z\7\w\6\o\6\6\h\s\v\w\i\7\9\r\0\y\m\8\l\q\x\n\c\4\m\h\w\4\0\n\l\n\9\x\b\u\s\f\7\x\f\o\i\y\t\4\3\d\9\t\j\p\e\8\a\h\u\j\v\x\w\b\w\a\j\q\d\a\8\k\w\x\m\o\w\m\j\c\d\j\0\j\c\b\n\v\f\y\f\b\4\4\p\0\p\r\s\9\7\k\u\w\1\e\a\y\t\1\m\4\9\z\4\7\z\g\q\a\e\o\g\j\6\8\b\x\n\r\j\w\0\t\m\v\1\7\r\i\b\a\t\i\a\c\k\w\z\d\g\b\b\y\3\a\k\6\n\c\0\b\b\y\v\q\j\j\b\4\z\l\z\m\h\1\u\r\3\j\y\v\r\a\6\v\5\n\j\0\r\2\h\t\o\h\8\q\c\4\f\d\z\b\4\q\n\a\z\x
\f\9\u\8\k\5\9\e\9\y\0\n\v\b\8\r\z\i\r\0\6\l\7\b\6\7\l\t\y\6\y\e\1\u\j\s\9\4\8\e\i\5\b\d\2\7\2\6\6\3\6\m\e\h\t\b\b\y\r\z\i\c\z\w\d\5\2\s\k\7\p\1\n\j\m\s\o\2\q\4\k\2\z\2\a\p\k\5\1\q\y\7\4\a\s\u\n\6\b\q\p\v\6\8\k\8\l\b\o\9\z\c\6\c\t\x\m\p\j\d\s\5\c\m\d\m\f\p\z\3\r\v\y\d\c\z\q\i\5\z\y\f\2\k\1\1\v\y\j\i\h\p\q\z\e\t\k\w\k\a\y\5\6\4\4\r\n\4\6\3\6\7\u\h\h\g\j\e\5\3\z\h\t\6\r\f\x\3\m\d\s\r\p\h\w\o\9\3\0\d\o\t\v\g\q\9\g\q\t\g\b\i\u\z\s\a\3\q\0\1\q\p\6\4\r\8\k\p\p\2\r\n\z\d\2\d\1\s\f\f\p\g\0\p\s\0\h\i\4\x\u\9\9\s\c\r\z\s\c\m\y\i\w\h\2\4\o\l\p\l\0\j\r\2\y\1\c\o\9\v\k\k\h\3\y\j\n\9\z\d\7\q\7\p\j\q\r\r\1\m\o\o\h\e\g\u\b\5\l\5\8\5\v\8\c\p\y\y\x\3\c\b\k\2\m\h\m\b\x\l\i\4\d\w\o\z\f\s\8\x\3\y\x\f\o\i\y\6\s\6\1\d\r\o\i\j\m\8\9\4\w\j\6\1\h\w\d\k\m\c\3\v\q\v\5\u\0\k\9\h\d\v\z\9\l\g\y\y\a\s\0\f\2\c\2\y\1\i\6\d\a\s\x\r\8\l\p\d\g\3\a\q\3\m\9\b\a\b\n\5\d\d\e\g\s\9\u\u\g\n\u\4\j\6\k\d\0\9\a\u\w\p\x\a\b\s\j\x\6\8\f\f\j\t\i\d\y\a\w\q\w\3\w\a\u\a\v\k\t\v\e\e\b\j\j\v\a\g\a\9\x\o\z\c\w\l\1\w\7\t\m\c\5\5\4\9\8\y\d\m\m\r\t\u\a\v\8\b\f\l\b\p\t\y\q\5\l\f\j\o\r\a\2\u\q\d\t\7\q\9\0\6\a\6\n\x\w\h\q\t\2 ]] 00:07:10.171 ************************************ 00:07:10.171 END TEST dd_rw_offset 00:07:10.171 ************************************ 00:07:10.171 00:07:10.171 real 0m1.387s 00:07:10.171 user 0m0.959s 00:07:10.172 sys 0m0.609s 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.172 08:02:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.432 [2024-06-10 08:02:32.039965] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:10.432 [2024-06-10 08:02:32.040065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62929 ] 00:07:10.432 { 00:07:10.432 "subsystems": [ 00:07:10.432 { 00:07:10.432 "subsystem": "bdev", 00:07:10.432 "config": [ 00:07:10.432 { 00:07:10.432 "params": { 00:07:10.432 "trtype": "pcie", 00:07:10.432 "traddr": "0000:00:10.0", 00:07:10.432 "name": "Nvme0" 00:07:10.432 }, 00:07:10.432 "method": "bdev_nvme_attach_controller" 00:07:10.432 }, 00:07:10.432 { 00:07:10.432 "method": "bdev_wait_for_examine" 00:07:10.432 } 00:07:10.432 ] 00:07:10.432 } 00:07:10.432 ] 00:07:10.432 } 00:07:10.432 [2024-06-10 08:02:32.178764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.432 [2024-06-10 08:02:32.283405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.689 [2024-06-10 08:02:32.337887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.947  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:10.947 00:07:10.947 08:02:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.947 00:07:10.947 real 0m18.661s 00:07:10.947 user 0m13.503s 00:07:10.947 sys 0m6.806s 00:07:10.947 08:02:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:10.947 08:02:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.947 ************************************ 00:07:10.947 END TEST spdk_dd_basic_rw 00:07:10.947 ************************************ 00:07:10.947 08:02:32 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:10.947 08:02:32 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:10.947 08:02:32 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:10.947 08:02:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:10.947 ************************************ 00:07:10.947 START TEST spdk_dd_posix 00:07:10.947 ************************************ 00:07:10.947 08:02:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:10.947 * Looking for test storage... 
00:07:11.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:11.205 * First test run, liburing in use 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:11.205 ************************************ 00:07:11.205 START TEST dd_flag_append 00:07:11.205 ************************************ 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # append 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=xsndjqo0seudjpbun4kl5o981ig9mwnr 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=s2xp5vle0upst235v0kydjt6yd75ahza 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s xsndjqo0seudjpbun4kl5o981ig9mwnr 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s s2xp5vle0upst235v0kydjt6yd75ahza 00:07:11.205 08:02:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:11.205 [2024-06-10 08:02:32.884919] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
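The dd_flag_append case launched here seeds dd.dump0 and dd.dump1 with two 32-byte strings from gen_bytes, copies dump0 onto dump1 with --oflag=append, and then expects dump1 to hold its original bytes followed by the dump0 bytes. A minimal sketch of that check (the literal strings are the ones generated in this particular run; the shortened paths are an assumption):

# Illustrative sketch of the append-flag check traced below.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=xsndjqo0seudjpbun4kl5o981ig9mwnr    # gen_bytes 32 output from this run
dump1=s2xp5vle0upst235v0kydjt6yd75ahza    # gen_bytes 32 output from this run
printf '%s' "$dump0" > dd.dump0
printf '%s' "$dump1" > dd.dump1
# --oflag=append must extend dd.dump1 rather than truncate it.
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ $(<dd.dump1) == "${dump1}${dump0}" ]] && echo 'append preserved the existing contents'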
00:07:11.205 [2024-06-10 08:02:32.885033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62989 ] 00:07:11.205 [2024-06-10 08:02:33.022890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.463 [2024-06-10 08:02:33.132511] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.463 [2024-06-10 08:02:33.185670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.722  Copying: 32/32 [B] (average 31 kBps) 00:07:11.722 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ s2xp5vle0upst235v0kydjt6yd75ahzaxsndjqo0seudjpbun4kl5o981ig9mwnr == \s\2\x\p\5\v\l\e\0\u\p\s\t\2\3\5\v\0\k\y\d\j\t\6\y\d\7\5\a\h\z\a\x\s\n\d\j\q\o\0\s\e\u\d\j\p\b\u\n\4\k\l\5\o\9\8\1\i\g\9\m\w\n\r ]] 00:07:11.722 00:07:11.722 real 0m0.597s 00:07:11.722 user 0m0.344s 00:07:11.722 sys 0m0.265s 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.722 ************************************ 00:07:11.722 END TEST dd_flag_append 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:11.722 ************************************ 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:11.722 ************************************ 00:07:11.722 START TEST dd_flag_directory 00:07:11.722 ************************************ 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # directory 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # local es=0 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t 
"$arg")" in 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:11.722 08:02:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.722 [2024-06-10 08:02:33.520607] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:11.722 [2024-06-10 08:02:33.520705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63017 ] 00:07:11.982 [2024-06-10 08:02:33.654690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.982 [2024-06-10 08:02:33.761090] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.982 [2024-06-10 08:02:33.813331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.982 [2024-06-10 08:02:33.844970] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:11.982 [2024-06-10 08:02:33.845033] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:11.982 [2024-06-10 08:02:33.845049] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.240 [2024-06-10 08:02:33.956494] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:12.240 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # es=236 00:07:12.240 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # es=108 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # case "$es" in 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@669 -- # es=1 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # local es=0 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:12.241 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:12.499 [2024-06-10 08:02:34.123274] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:12.499 [2024-06-10 08:02:34.123419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63027 ] 00:07:12.499 [2024-06-10 08:02:34.266824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.758 [2024-06-10 08:02:34.372537] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.758 [2024-06-10 08:02:34.425729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.758 [2024-06-10 08:02:34.457759] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:12.758 [2024-06-10 08:02:34.457844] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:12.758 [2024-06-10 08:02:34.457876] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.758 [2024-06-10 08:02:34.568800] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # es=236 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # es=108 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # case "$es" in 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@669 -- # es=1 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:13.017 00:07:13.017 real 0m1.197s 00:07:13.017 user 0m0.683s 00:07:13.017 sys 0m0.304s 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:13.017 ************************************ 00:07:13.017 END TEST dd_flag_directory 00:07:13.017 ************************************ 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:13.017 08:02:34 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:13.017 ************************************ 00:07:13.017 START TEST dd_flag_nofollow 00:07:13.017 ************************************ 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # nofollow 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:13.017 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # local es=0 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:13.018 08:02:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.018 [2024-06-10 08:02:34.781356] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:13.018 [2024-06-10 08:02:34.781486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63061 ] 00:07:13.277 [2024-06-10 08:02:34.914449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.277 [2024-06-10 08:02:35.025654] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.277 [2024-06-10 08:02:35.078938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.277 [2024-06-10 08:02:35.111989] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:13.277 [2024-06-10 08:02:35.112062] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:13.277 [2024-06-10 08:02:35.112093] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.536 [2024-06-10 08:02:35.225525] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # es=216 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # es=88 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # case "$es" in 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@669 -- # es=1 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # local es=0 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow 
-- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:13.536 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:13.536 [2024-06-10 08:02:35.401126] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:13.536 [2024-06-10 08:02:35.401333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63070 ] 00:07:13.794 [2024-06-10 08:02:35.544717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.053 [2024-06-10 08:02:35.663022] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.053 [2024-06-10 08:02:35.717674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.053 [2024-06-10 08:02:35.750204] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:14.053 [2024-06-10 08:02:35.750277] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:14.053 [2024-06-10 08:02:35.750309] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:14.053 [2024-06-10 08:02:35.864276] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:14.313 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # es=216 00:07:14.313 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:14.313 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # es=88 00:07:14.313 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # case "$es" in 00:07:14.313 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@669 -- # es=1 00:07:14.313 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:14.313 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:14.313 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:14.313 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:14.313 08:02:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.313 [2024-06-10 08:02:36.032866] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:14.313 [2024-06-10 08:02:36.032972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63078 ] 00:07:14.313 [2024-06-10 08:02:36.167903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.573 [2024-06-10 08:02:36.283473] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.573 [2024-06-10 08:02:36.338881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.836  Copying: 512/512 [B] (average 500 kBps) 00:07:14.836 00:07:14.836 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 2y5o2eowl3hsab5etznf523g9odl5ctms09sclcygm7gnf1q5s0ae9rm45zadxaul82xgnaml2fznq7xsuxhocee6mjl8z3kr5wtqoycgbcxw41i2m5hrjs46jp6nw9lgvophu4v92pt83y60gmosqwpzqy02dfols9lmrk3y8ppo6wvow3ezv4kptepyggkstv9a63mi7xbw5cwerqlr4ihn06cruokzrh884rjtokdh1yfhjn6ip7gd58w10zh00buh3hpfjx30b35c8clx4z7qek2irgdgp7t9u8vco5e0udtef0zxhvv7nlg6xhdx7ncp32418zgrh1fgr3icgey31175rkgyj37auq4tra85961z9wofw81kfxaz46vzsjdgn5rjvgrttqdch393eoq47h9f8c3b87yte4qgq9zynbkbcq0unrfedj24a7iknryt7dqk7bckdtjvchktj9aaus0v38avnwbj4qngpgygcf03em6viopk5shlyva == \2\y\5\o\2\e\o\w\l\3\h\s\a\b\5\e\t\z\n\f\5\2\3\g\9\o\d\l\5\c\t\m\s\0\9\s\c\l\c\y\g\m\7\g\n\f\1\q\5\s\0\a\e\9\r\m\4\5\z\a\d\x\a\u\l\8\2\x\g\n\a\m\l\2\f\z\n\q\7\x\s\u\x\h\o\c\e\e\6\m\j\l\8\z\3\k\r\5\w\t\q\o\y\c\g\b\c\x\w\4\1\i\2\m\5\h\r\j\s\4\6\j\p\6\n\w\9\l\g\v\o\p\h\u\4\v\9\2\p\t\8\3\y\6\0\g\m\o\s\q\w\p\z\q\y\0\2\d\f\o\l\s\9\l\m\r\k\3\y\8\p\p\o\6\w\v\o\w\3\e\z\v\4\k\p\t\e\p\y\g\g\k\s\t\v\9\a\6\3\m\i\7\x\b\w\5\c\w\e\r\q\l\r\4\i\h\n\0\6\c\r\u\o\k\z\r\h\8\8\4\r\j\t\o\k\d\h\1\y\f\h\j\n\6\i\p\7\g\d\5\8\w\1\0\z\h\0\0\b\u\h\3\h\p\f\j\x\3\0\b\3\5\c\8\c\l\x\4\z\7\q\e\k\2\i\r\g\d\g\p\7\t\9\u\8\v\c\o\5\e\0\u\d\t\e\f\0\z\x\h\v\v\7\n\l\g\6\x\h\d\x\7\n\c\p\3\2\4\1\8\z\g\r\h\1\f\g\r\3\i\c\g\e\y\3\1\1\7\5\r\k\g\y\j\3\7\a\u\q\4\t\r\a\8\5\9\6\1\z\9\w\o\f\w\8\1\k\f\x\a\z\4\6\v\z\s\j\d\g\n\5\r\j\v\g\r\t\t\q\d\c\h\3\9\3\e\o\q\4\7\h\9\f\8\c\3\b\8\7\y\t\e\4\q\g\q\9\z\y\n\b\k\b\c\q\0\u\n\r\f\e\d\j\2\4\a\7\i\k\n\r\y\t\7\d\q\k\7\b\c\k\d\t\j\v\c\h\k\t\j\9\a\a\u\s\0\v\3\8\a\v\n\w\b\j\4\q\n\g\p\g\y\g\c\f\0\3\e\m\6\v\i\o\p\k\5\s\h\l\y\v\a ]] 00:07:14.836 00:07:14.836 real 0m1.860s 00:07:14.836 user 0m1.081s 00:07:14.836 sys 0m0.584s 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:14.837 ************************************ 00:07:14.837 END TEST dd_flag_nofollow 00:07:14.837 ************************************ 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:14.837 ************************************ 00:07:14.837 START TEST dd_flag_noatime 00:07:14.837 ************************************ 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # noatime 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:14.837 08:02:36 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1718006556 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1718006556 00:07:14.837 08:02:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:16.214 08:02:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.214 [2024-06-10 08:02:37.702935] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:16.214 [2024-06-10 08:02:37.703042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63126 ] 00:07:16.214 [2024-06-10 08:02:37.838148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.214 [2024-06-10 08:02:37.967584] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.214 [2024-06-10 08:02:38.023900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.474  Copying: 512/512 [B] (average 500 kBps) 00:07:16.474 00:07:16.474 08:02:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:16.474 08:02:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1718006556 )) 00:07:16.474 08:02:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.474 08:02:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1718006556 )) 00:07:16.475 08:02:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.475 [2024-06-10 08:02:38.332443] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:16.475 [2024-06-10 08:02:38.332546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63134 ] 00:07:16.734 [2024-06-10 08:02:38.466989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.734 [2024-06-10 08:02:38.584983] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.993 [2024-06-10 08:02:38.639564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.252  Copying: 512/512 [B] (average 500 kBps) 00:07:17.252 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.252 ************************************ 00:07:17.252 END TEST dd_flag_noatime 00:07:17.252 ************************************ 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1718006558 )) 00:07:17.252 00:07:17.252 real 0m2.258s 00:07:17.252 user 0m0.718s 00:07:17.252 sys 0m0.578s 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:17.252 ************************************ 00:07:17.252 START TEST dd_flags_misc 00:07:17.252 ************************************ 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # io 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:17.252 08:02:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:17.252 [2024-06-10 08:02:39.005766] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:17.252 [2024-06-10 08:02:39.006070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63168 ] 00:07:17.511 [2024-06-10 08:02:39.141960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.511 [2024-06-10 08:02:39.258384] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.511 [2024-06-10 08:02:39.311837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.770  Copying: 512/512 [B] (average 500 kBps) 00:07:17.770 00:07:17.770 08:02:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lhyawg15rzyswo9s9k947396peicmbiibrhmnvquo7z7lrwzk6gr2zpwizsy8zs6vklnptrwhe2psm1k4toy9eowm6j7wn3qz4xiqnaxvx70ge7xqz15j93s4kewsa3iwotza2mjx716afx4ysaj5ylwlwr3mqkk1m05nwsylyhairn11ro9tmbpfletvbo5m8qhfujxx3oh27gzbuay3yrz02utdhc396yon6zbkqokw1szma2djybq4r5lwkcklh9m6fbalmvzattmx2pzlfi07xo2vw39kg26yunlmhgvv4l3gjleoy1ecbrlzjb7x4wzoejb03ztv8e0gpz2onylmkcn1f88lu5hr5dtwxux94qbs93fs2krddia5jtjp6kc0fur5jyxyvul4gpo9hyp0q73rjrb8h3jvsqk5h3gk6djx2uwchomtpvgnabcrnrmfwku9k39kc0tfuk1deokktmnvtjfmjiiqjmydziin8wcmwcafqcskaue7ctt == \l\h\y\a\w\g\1\5\r\z\y\s\w\o\9\s\9\k\9\4\7\3\9\6\p\e\i\c\m\b\i\i\b\r\h\m\n\v\q\u\o\7\z\7\l\r\w\z\k\6\g\r\2\z\p\w\i\z\s\y\8\z\s\6\v\k\l\n\p\t\r\w\h\e\2\p\s\m\1\k\4\t\o\y\9\e\o\w\m\6\j\7\w\n\3\q\z\4\x\i\q\n\a\x\v\x\7\0\g\e\7\x\q\z\1\5\j\9\3\s\4\k\e\w\s\a\3\i\w\o\t\z\a\2\m\j\x\7\1\6\a\f\x\4\y\s\a\j\5\y\l\w\l\w\r\3\m\q\k\k\1\m\0\5\n\w\s\y\l\y\h\a\i\r\n\1\1\r\o\9\t\m\b\p\f\l\e\t\v\b\o\5\m\8\q\h\f\u\j\x\x\3\o\h\2\7\g\z\b\u\a\y\3\y\r\z\0\2\u\t\d\h\c\3\9\6\y\o\n\6\z\b\k\q\o\k\w\1\s\z\m\a\2\d\j\y\b\q\4\r\5\l\w\k\c\k\l\h\9\m\6\f\b\a\l\m\v\z\a\t\t\m\x\2\p\z\l\f\i\0\7\x\o\2\v\w\3\9\k\g\2\6\y\u\n\l\m\h\g\v\v\4\l\3\g\j\l\e\o\y\1\e\c\b\r\l\z\j\b\7\x\4\w\z\o\e\j\b\0\3\z\t\v\8\e\0\g\p\z\2\o\n\y\l\m\k\c\n\1\f\8\8\l\u\5\h\r\5\d\t\w\x\u\x\9\4\q\b\s\9\3\f\s\2\k\r\d\d\i\a\5\j\t\j\p\6\k\c\0\f\u\r\5\j\y\x\y\v\u\l\4\g\p\o\9\h\y\p\0\q\7\3\r\j\r\b\8\h\3\j\v\s\q\k\5\h\3\g\k\6\d\j\x\2\u\w\c\h\o\m\t\p\v\g\n\a\b\c\r\n\r\m\f\w\k\u\9\k\3\9\k\c\0\t\f\u\k\1\d\e\o\k\k\t\m\n\v\t\j\f\m\j\i\i\q\j\m\y\d\z\i\i\n\8\w\c\m\w\c\a\f\q\c\s\k\a\u\e\7\c\t\t ]] 00:07:17.770 08:02:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:17.770 08:02:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:17.770 [2024-06-10 08:02:39.616466] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:17.770 [2024-06-10 08:02:39.616590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63177 ] 00:07:18.029 [2024-06-10 08:02:39.756928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.029 [2024-06-10 08:02:39.875400] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.305 [2024-06-10 08:02:39.929855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.568  Copying: 512/512 [B] (average 500 kBps) 00:07:18.568 00:07:18.568 08:02:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lhyawg15rzyswo9s9k947396peicmbiibrhmnvquo7z7lrwzk6gr2zpwizsy8zs6vklnptrwhe2psm1k4toy9eowm6j7wn3qz4xiqnaxvx70ge7xqz15j93s4kewsa3iwotza2mjx716afx4ysaj5ylwlwr3mqkk1m05nwsylyhairn11ro9tmbpfletvbo5m8qhfujxx3oh27gzbuay3yrz02utdhc396yon6zbkqokw1szma2djybq4r5lwkcklh9m6fbalmvzattmx2pzlfi07xo2vw39kg26yunlmhgvv4l3gjleoy1ecbrlzjb7x4wzoejb03ztv8e0gpz2onylmkcn1f88lu5hr5dtwxux94qbs93fs2krddia5jtjp6kc0fur5jyxyvul4gpo9hyp0q73rjrb8h3jvsqk5h3gk6djx2uwchomtpvgnabcrnrmfwku9k39kc0tfuk1deokktmnvtjfmjiiqjmydziin8wcmwcafqcskaue7ctt == \l\h\y\a\w\g\1\5\r\z\y\s\w\o\9\s\9\k\9\4\7\3\9\6\p\e\i\c\m\b\i\i\b\r\h\m\n\v\q\u\o\7\z\7\l\r\w\z\k\6\g\r\2\z\p\w\i\z\s\y\8\z\s\6\v\k\l\n\p\t\r\w\h\e\2\p\s\m\1\k\4\t\o\y\9\e\o\w\m\6\j\7\w\n\3\q\z\4\x\i\q\n\a\x\v\x\7\0\g\e\7\x\q\z\1\5\j\9\3\s\4\k\e\w\s\a\3\i\w\o\t\z\a\2\m\j\x\7\1\6\a\f\x\4\y\s\a\j\5\y\l\w\l\w\r\3\m\q\k\k\1\m\0\5\n\w\s\y\l\y\h\a\i\r\n\1\1\r\o\9\t\m\b\p\f\l\e\t\v\b\o\5\m\8\q\h\f\u\j\x\x\3\o\h\2\7\g\z\b\u\a\y\3\y\r\z\0\2\u\t\d\h\c\3\9\6\y\o\n\6\z\b\k\q\o\k\w\1\s\z\m\a\2\d\j\y\b\q\4\r\5\l\w\k\c\k\l\h\9\m\6\f\b\a\l\m\v\z\a\t\t\m\x\2\p\z\l\f\i\0\7\x\o\2\v\w\3\9\k\g\2\6\y\u\n\l\m\h\g\v\v\4\l\3\g\j\l\e\o\y\1\e\c\b\r\l\z\j\b\7\x\4\w\z\o\e\j\b\0\3\z\t\v\8\e\0\g\p\z\2\o\n\y\l\m\k\c\n\1\f\8\8\l\u\5\h\r\5\d\t\w\x\u\x\9\4\q\b\s\9\3\f\s\2\k\r\d\d\i\a\5\j\t\j\p\6\k\c\0\f\u\r\5\j\y\x\y\v\u\l\4\g\p\o\9\h\y\p\0\q\7\3\r\j\r\b\8\h\3\j\v\s\q\k\5\h\3\g\k\6\d\j\x\2\u\w\c\h\o\m\t\p\v\g\n\a\b\c\r\n\r\m\f\w\k\u\9\k\3\9\k\c\0\t\f\u\k\1\d\e\o\k\k\t\m\n\v\t\j\f\m\j\i\i\q\j\m\y\d\z\i\i\n\8\w\c\m\w\c\a\f\q\c\s\k\a\u\e\7\c\t\t ]] 00:07:18.568 08:02:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:18.568 08:02:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:18.568 [2024-06-10 08:02:40.244810] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:18.568 [2024-06-10 08:02:40.244929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63187 ] 00:07:18.568 [2024-06-10 08:02:40.386700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.827 [2024-06-10 08:02:40.506663] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.827 [2024-06-10 08:02:40.564051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.086  Copying: 512/512 [B] (average 125 kBps) 00:07:19.086 00:07:19.086 08:02:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lhyawg15rzyswo9s9k947396peicmbiibrhmnvquo7z7lrwzk6gr2zpwizsy8zs6vklnptrwhe2psm1k4toy9eowm6j7wn3qz4xiqnaxvx70ge7xqz15j93s4kewsa3iwotza2mjx716afx4ysaj5ylwlwr3mqkk1m05nwsylyhairn11ro9tmbpfletvbo5m8qhfujxx3oh27gzbuay3yrz02utdhc396yon6zbkqokw1szma2djybq4r5lwkcklh9m6fbalmvzattmx2pzlfi07xo2vw39kg26yunlmhgvv4l3gjleoy1ecbrlzjb7x4wzoejb03ztv8e0gpz2onylmkcn1f88lu5hr5dtwxux94qbs93fs2krddia5jtjp6kc0fur5jyxyvul4gpo9hyp0q73rjrb8h3jvsqk5h3gk6djx2uwchomtpvgnabcrnrmfwku9k39kc0tfuk1deokktmnvtjfmjiiqjmydziin8wcmwcafqcskaue7ctt == \l\h\y\a\w\g\1\5\r\z\y\s\w\o\9\s\9\k\9\4\7\3\9\6\p\e\i\c\m\b\i\i\b\r\h\m\n\v\q\u\o\7\z\7\l\r\w\z\k\6\g\r\2\z\p\w\i\z\s\y\8\z\s\6\v\k\l\n\p\t\r\w\h\e\2\p\s\m\1\k\4\t\o\y\9\e\o\w\m\6\j\7\w\n\3\q\z\4\x\i\q\n\a\x\v\x\7\0\g\e\7\x\q\z\1\5\j\9\3\s\4\k\e\w\s\a\3\i\w\o\t\z\a\2\m\j\x\7\1\6\a\f\x\4\y\s\a\j\5\y\l\w\l\w\r\3\m\q\k\k\1\m\0\5\n\w\s\y\l\y\h\a\i\r\n\1\1\r\o\9\t\m\b\p\f\l\e\t\v\b\o\5\m\8\q\h\f\u\j\x\x\3\o\h\2\7\g\z\b\u\a\y\3\y\r\z\0\2\u\t\d\h\c\3\9\6\y\o\n\6\z\b\k\q\o\k\w\1\s\z\m\a\2\d\j\y\b\q\4\r\5\l\w\k\c\k\l\h\9\m\6\f\b\a\l\m\v\z\a\t\t\m\x\2\p\z\l\f\i\0\7\x\o\2\v\w\3\9\k\g\2\6\y\u\n\l\m\h\g\v\v\4\l\3\g\j\l\e\o\y\1\e\c\b\r\l\z\j\b\7\x\4\w\z\o\e\j\b\0\3\z\t\v\8\e\0\g\p\z\2\o\n\y\l\m\k\c\n\1\f\8\8\l\u\5\h\r\5\d\t\w\x\u\x\9\4\q\b\s\9\3\f\s\2\k\r\d\d\i\a\5\j\t\j\p\6\k\c\0\f\u\r\5\j\y\x\y\v\u\l\4\g\p\o\9\h\y\p\0\q\7\3\r\j\r\b\8\h\3\j\v\s\q\k\5\h\3\g\k\6\d\j\x\2\u\w\c\h\o\m\t\p\v\g\n\a\b\c\r\n\r\m\f\w\k\u\9\k\3\9\k\c\0\t\f\u\k\1\d\e\o\k\k\t\m\n\v\t\j\f\m\j\i\i\q\j\m\y\d\z\i\i\n\8\w\c\m\w\c\a\f\q\c\s\k\a\u\e\7\c\t\t ]] 00:07:19.086 08:02:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:19.086 08:02:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:19.086 [2024-06-10 08:02:40.876380] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:19.086 [2024-06-10 08:02:40.876520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63202 ] 00:07:19.346 [2024-06-10 08:02:41.018066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.346 [2024-06-10 08:02:41.140962] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.346 [2024-06-10 08:02:41.199710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.605  Copying: 512/512 [B] (average 250 kBps) 00:07:19.605 00:07:19.605 08:02:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lhyawg15rzyswo9s9k947396peicmbiibrhmnvquo7z7lrwzk6gr2zpwizsy8zs6vklnptrwhe2psm1k4toy9eowm6j7wn3qz4xiqnaxvx70ge7xqz15j93s4kewsa3iwotza2mjx716afx4ysaj5ylwlwr3mqkk1m05nwsylyhairn11ro9tmbpfletvbo5m8qhfujxx3oh27gzbuay3yrz02utdhc396yon6zbkqokw1szma2djybq4r5lwkcklh9m6fbalmvzattmx2pzlfi07xo2vw39kg26yunlmhgvv4l3gjleoy1ecbrlzjb7x4wzoejb03ztv8e0gpz2onylmkcn1f88lu5hr5dtwxux94qbs93fs2krddia5jtjp6kc0fur5jyxyvul4gpo9hyp0q73rjrb8h3jvsqk5h3gk6djx2uwchomtpvgnabcrnrmfwku9k39kc0tfuk1deokktmnvtjfmjiiqjmydziin8wcmwcafqcskaue7ctt == \l\h\y\a\w\g\1\5\r\z\y\s\w\o\9\s\9\k\9\4\7\3\9\6\p\e\i\c\m\b\i\i\b\r\h\m\n\v\q\u\o\7\z\7\l\r\w\z\k\6\g\r\2\z\p\w\i\z\s\y\8\z\s\6\v\k\l\n\p\t\r\w\h\e\2\p\s\m\1\k\4\t\o\y\9\e\o\w\m\6\j\7\w\n\3\q\z\4\x\i\q\n\a\x\v\x\7\0\g\e\7\x\q\z\1\5\j\9\3\s\4\k\e\w\s\a\3\i\w\o\t\z\a\2\m\j\x\7\1\6\a\f\x\4\y\s\a\j\5\y\l\w\l\w\r\3\m\q\k\k\1\m\0\5\n\w\s\y\l\y\h\a\i\r\n\1\1\r\o\9\t\m\b\p\f\l\e\t\v\b\o\5\m\8\q\h\f\u\j\x\x\3\o\h\2\7\g\z\b\u\a\y\3\y\r\z\0\2\u\t\d\h\c\3\9\6\y\o\n\6\z\b\k\q\o\k\w\1\s\z\m\a\2\d\j\y\b\q\4\r\5\l\w\k\c\k\l\h\9\m\6\f\b\a\l\m\v\z\a\t\t\m\x\2\p\z\l\f\i\0\7\x\o\2\v\w\3\9\k\g\2\6\y\u\n\l\m\h\g\v\v\4\l\3\g\j\l\e\o\y\1\e\c\b\r\l\z\j\b\7\x\4\w\z\o\e\j\b\0\3\z\t\v\8\e\0\g\p\z\2\o\n\y\l\m\k\c\n\1\f\8\8\l\u\5\h\r\5\d\t\w\x\u\x\9\4\q\b\s\9\3\f\s\2\k\r\d\d\i\a\5\j\t\j\p\6\k\c\0\f\u\r\5\j\y\x\y\v\u\l\4\g\p\o\9\h\y\p\0\q\7\3\r\j\r\b\8\h\3\j\v\s\q\k\5\h\3\g\k\6\d\j\x\2\u\w\c\h\o\m\t\p\v\g\n\a\b\c\r\n\r\m\f\w\k\u\9\k\3\9\k\c\0\t\f\u\k\1\d\e\o\k\k\t\m\n\v\t\j\f\m\j\i\i\q\j\m\y\d\z\i\i\n\8\w\c\m\w\c\a\f\q\c\s\k\a\u\e\7\c\t\t ]] 00:07:19.605 08:02:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:19.605 08:02:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:19.605 08:02:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:19.605 08:02:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:19.605 08:02:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:19.605 08:02:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:19.864 [2024-06-10 08:02:41.522249] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:19.864 [2024-06-10 08:02:41.522570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63206 ] 00:07:19.864 [2024-06-10 08:02:41.662389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.123 [2024-06-10 08:02:41.775464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.123 [2024-06-10 08:02:41.831418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.382  Copying: 512/512 [B] (average 500 kBps) 00:07:20.382 00:07:20.382 08:02:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3dvpjoiq4w348r9cbaudbgjc7eriqd5pwfix8h39m9kzjwwjh4p43hx3dfoe4p68ypc5gu6fbimyflfen7ypbgrvgkpj3hb8171rar1gcqjvpqyg45wzj8z4pga6i57b30mac1qewm9zr1ggsylfkke4ag6gzxw80oj8nxpavc2hdere2qc66ttt15759oyul7ijonr6ybgz0ombg0jn4rb1pbx5re99fmnegz2jm71no6xu4amkttul6e7y8nkzzpn95rwfvgdwnt6h1yn21oiyns376m83uztyc1ujj942fawamafklnvl99gq4ni3x2a8xono42ofhi4pr1iozpoyc9o7wajrcgs57ac07ng40z20xkwlxnyqd1palrx4p9emm19wn5eemkdmjv3hxlhldft5v0ui8cdq3lj2ipbevzqwqdq3n4al6rdp2tr2vg9irdj9q0wolx30qeaz44pzs73wajq1jc2fvbzh8zjb4jcdke17pjqg7c1gs7hy == \3\d\v\p\j\o\i\q\4\w\3\4\8\r\9\c\b\a\u\d\b\g\j\c\7\e\r\i\q\d\5\p\w\f\i\x\8\h\3\9\m\9\k\z\j\w\w\j\h\4\p\4\3\h\x\3\d\f\o\e\4\p\6\8\y\p\c\5\g\u\6\f\b\i\m\y\f\l\f\e\n\7\y\p\b\g\r\v\g\k\p\j\3\h\b\8\1\7\1\r\a\r\1\g\c\q\j\v\p\q\y\g\4\5\w\z\j\8\z\4\p\g\a\6\i\5\7\b\3\0\m\a\c\1\q\e\w\m\9\z\r\1\g\g\s\y\l\f\k\k\e\4\a\g\6\g\z\x\w\8\0\o\j\8\n\x\p\a\v\c\2\h\d\e\r\e\2\q\c\6\6\t\t\t\1\5\7\5\9\o\y\u\l\7\i\j\o\n\r\6\y\b\g\z\0\o\m\b\g\0\j\n\4\r\b\1\p\b\x\5\r\e\9\9\f\m\n\e\g\z\2\j\m\7\1\n\o\6\x\u\4\a\m\k\t\t\u\l\6\e\7\y\8\n\k\z\z\p\n\9\5\r\w\f\v\g\d\w\n\t\6\h\1\y\n\2\1\o\i\y\n\s\3\7\6\m\8\3\u\z\t\y\c\1\u\j\j\9\4\2\f\a\w\a\m\a\f\k\l\n\v\l\9\9\g\q\4\n\i\3\x\2\a\8\x\o\n\o\4\2\o\f\h\i\4\p\r\1\i\o\z\p\o\y\c\9\o\7\w\a\j\r\c\g\s\5\7\a\c\0\7\n\g\4\0\z\2\0\x\k\w\l\x\n\y\q\d\1\p\a\l\r\x\4\p\9\e\m\m\1\9\w\n\5\e\e\m\k\d\m\j\v\3\h\x\l\h\l\d\f\t\5\v\0\u\i\8\c\d\q\3\l\j\2\i\p\b\e\v\z\q\w\q\d\q\3\n\4\a\l\6\r\d\p\2\t\r\2\v\g\9\i\r\d\j\9\q\0\w\o\l\x\3\0\q\e\a\z\4\4\p\z\s\7\3\w\a\j\q\1\j\c\2\f\v\b\z\h\8\z\j\b\4\j\c\d\k\e\1\7\p\j\q\g\7\c\1\g\s\7\h\y ]] 00:07:20.382 08:02:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:20.382 08:02:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:20.382 [2024-06-10 08:02:42.131706] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:20.382 [2024-06-10 08:02:42.131832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63221 ] 00:07:20.640 [2024-06-10 08:02:42.269356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.640 [2024-06-10 08:02:42.362491] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.640 [2024-06-10 08:02:42.417214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.899  Copying: 512/512 [B] (average 500 kBps) 00:07:20.899 00:07:20.899 08:02:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3dvpjoiq4w348r9cbaudbgjc7eriqd5pwfix8h39m9kzjwwjh4p43hx3dfoe4p68ypc5gu6fbimyflfen7ypbgrvgkpj3hb8171rar1gcqjvpqyg45wzj8z4pga6i57b30mac1qewm9zr1ggsylfkke4ag6gzxw80oj8nxpavc2hdere2qc66ttt15759oyul7ijonr6ybgz0ombg0jn4rb1pbx5re99fmnegz2jm71no6xu4amkttul6e7y8nkzzpn95rwfvgdwnt6h1yn21oiyns376m83uztyc1ujj942fawamafklnvl99gq4ni3x2a8xono42ofhi4pr1iozpoyc9o7wajrcgs57ac07ng40z20xkwlxnyqd1palrx4p9emm19wn5eemkdmjv3hxlhldft5v0ui8cdq3lj2ipbevzqwqdq3n4al6rdp2tr2vg9irdj9q0wolx30qeaz44pzs73wajq1jc2fvbzh8zjb4jcdke17pjqg7c1gs7hy == \3\d\v\p\j\o\i\q\4\w\3\4\8\r\9\c\b\a\u\d\b\g\j\c\7\e\r\i\q\d\5\p\w\f\i\x\8\h\3\9\m\9\k\z\j\w\w\j\h\4\p\4\3\h\x\3\d\f\o\e\4\p\6\8\y\p\c\5\g\u\6\f\b\i\m\y\f\l\f\e\n\7\y\p\b\g\r\v\g\k\p\j\3\h\b\8\1\7\1\r\a\r\1\g\c\q\j\v\p\q\y\g\4\5\w\z\j\8\z\4\p\g\a\6\i\5\7\b\3\0\m\a\c\1\q\e\w\m\9\z\r\1\g\g\s\y\l\f\k\k\e\4\a\g\6\g\z\x\w\8\0\o\j\8\n\x\p\a\v\c\2\h\d\e\r\e\2\q\c\6\6\t\t\t\1\5\7\5\9\o\y\u\l\7\i\j\o\n\r\6\y\b\g\z\0\o\m\b\g\0\j\n\4\r\b\1\p\b\x\5\r\e\9\9\f\m\n\e\g\z\2\j\m\7\1\n\o\6\x\u\4\a\m\k\t\t\u\l\6\e\7\y\8\n\k\z\z\p\n\9\5\r\w\f\v\g\d\w\n\t\6\h\1\y\n\2\1\o\i\y\n\s\3\7\6\m\8\3\u\z\t\y\c\1\u\j\j\9\4\2\f\a\w\a\m\a\f\k\l\n\v\l\9\9\g\q\4\n\i\3\x\2\a\8\x\o\n\o\4\2\o\f\h\i\4\p\r\1\i\o\z\p\o\y\c\9\o\7\w\a\j\r\c\g\s\5\7\a\c\0\7\n\g\4\0\z\2\0\x\k\w\l\x\n\y\q\d\1\p\a\l\r\x\4\p\9\e\m\m\1\9\w\n\5\e\e\m\k\d\m\j\v\3\h\x\l\h\l\d\f\t\5\v\0\u\i\8\c\d\q\3\l\j\2\i\p\b\e\v\z\q\w\q\d\q\3\n\4\a\l\6\r\d\p\2\t\r\2\v\g\9\i\r\d\j\9\q\0\w\o\l\x\3\0\q\e\a\z\4\4\p\z\s\7\3\w\a\j\q\1\j\c\2\f\v\b\z\h\8\z\j\b\4\j\c\d\k\e\1\7\p\j\q\g\7\c\1\g\s\7\h\y ]] 00:07:20.899 08:02:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:20.899 08:02:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:20.899 [2024-06-10 08:02:42.729539] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:20.899 [2024-06-10 08:02:42.729638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63230 ] 00:07:21.158 [2024-06-10 08:02:42.868131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.158 [2024-06-10 08:02:42.971587] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.416 [2024-06-10 08:02:43.026530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.416  Copying: 512/512 [B] (average 250 kBps) 00:07:21.416 00:07:21.676 08:02:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3dvpjoiq4w348r9cbaudbgjc7eriqd5pwfix8h39m9kzjwwjh4p43hx3dfoe4p68ypc5gu6fbimyflfen7ypbgrvgkpj3hb8171rar1gcqjvpqyg45wzj8z4pga6i57b30mac1qewm9zr1ggsylfkke4ag6gzxw80oj8nxpavc2hdere2qc66ttt15759oyul7ijonr6ybgz0ombg0jn4rb1pbx5re99fmnegz2jm71no6xu4amkttul6e7y8nkzzpn95rwfvgdwnt6h1yn21oiyns376m83uztyc1ujj942fawamafklnvl99gq4ni3x2a8xono42ofhi4pr1iozpoyc9o7wajrcgs57ac07ng40z20xkwlxnyqd1palrx4p9emm19wn5eemkdmjv3hxlhldft5v0ui8cdq3lj2ipbevzqwqdq3n4al6rdp2tr2vg9irdj9q0wolx30qeaz44pzs73wajq1jc2fvbzh8zjb4jcdke17pjqg7c1gs7hy == \3\d\v\p\j\o\i\q\4\w\3\4\8\r\9\c\b\a\u\d\b\g\j\c\7\e\r\i\q\d\5\p\w\f\i\x\8\h\3\9\m\9\k\z\j\w\w\j\h\4\p\4\3\h\x\3\d\f\o\e\4\p\6\8\y\p\c\5\g\u\6\f\b\i\m\y\f\l\f\e\n\7\y\p\b\g\r\v\g\k\p\j\3\h\b\8\1\7\1\r\a\r\1\g\c\q\j\v\p\q\y\g\4\5\w\z\j\8\z\4\p\g\a\6\i\5\7\b\3\0\m\a\c\1\q\e\w\m\9\z\r\1\g\g\s\y\l\f\k\k\e\4\a\g\6\g\z\x\w\8\0\o\j\8\n\x\p\a\v\c\2\h\d\e\r\e\2\q\c\6\6\t\t\t\1\5\7\5\9\o\y\u\l\7\i\j\o\n\r\6\y\b\g\z\0\o\m\b\g\0\j\n\4\r\b\1\p\b\x\5\r\e\9\9\f\m\n\e\g\z\2\j\m\7\1\n\o\6\x\u\4\a\m\k\t\t\u\l\6\e\7\y\8\n\k\z\z\p\n\9\5\r\w\f\v\g\d\w\n\t\6\h\1\y\n\2\1\o\i\y\n\s\3\7\6\m\8\3\u\z\t\y\c\1\u\j\j\9\4\2\f\a\w\a\m\a\f\k\l\n\v\l\9\9\g\q\4\n\i\3\x\2\a\8\x\o\n\o\4\2\o\f\h\i\4\p\r\1\i\o\z\p\o\y\c\9\o\7\w\a\j\r\c\g\s\5\7\a\c\0\7\n\g\4\0\z\2\0\x\k\w\l\x\n\y\q\d\1\p\a\l\r\x\4\p\9\e\m\m\1\9\w\n\5\e\e\m\k\d\m\j\v\3\h\x\l\h\l\d\f\t\5\v\0\u\i\8\c\d\q\3\l\j\2\i\p\b\e\v\z\q\w\q\d\q\3\n\4\a\l\6\r\d\p\2\t\r\2\v\g\9\i\r\d\j\9\q\0\w\o\l\x\3\0\q\e\a\z\4\4\p\z\s\7\3\w\a\j\q\1\j\c\2\f\v\b\z\h\8\z\j\b\4\j\c\d\k\e\1\7\p\j\q\g\7\c\1\g\s\7\h\y ]] 00:07:21.676 08:02:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:21.676 08:02:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:21.676 [2024-06-10 08:02:43.341394] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:21.676 [2024-06-10 08:02:43.341503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63240 ] 00:07:21.676 [2024-06-10 08:02:43.475337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.935 [2024-06-10 08:02:43.581662] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.935 [2024-06-10 08:02:43.642046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.195  Copying: 512/512 [B] (average 250 kBps) 00:07:22.195 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3dvpjoiq4w348r9cbaudbgjc7eriqd5pwfix8h39m9kzjwwjh4p43hx3dfoe4p68ypc5gu6fbimyflfen7ypbgrvgkpj3hb8171rar1gcqjvpqyg45wzj8z4pga6i57b30mac1qewm9zr1ggsylfkke4ag6gzxw80oj8nxpavc2hdere2qc66ttt15759oyul7ijonr6ybgz0ombg0jn4rb1pbx5re99fmnegz2jm71no6xu4amkttul6e7y8nkzzpn95rwfvgdwnt6h1yn21oiyns376m83uztyc1ujj942fawamafklnvl99gq4ni3x2a8xono42ofhi4pr1iozpoyc9o7wajrcgs57ac07ng40z20xkwlxnyqd1palrx4p9emm19wn5eemkdmjv3hxlhldft5v0ui8cdq3lj2ipbevzqwqdq3n4al6rdp2tr2vg9irdj9q0wolx30qeaz44pzs73wajq1jc2fvbzh8zjb4jcdke17pjqg7c1gs7hy == \3\d\v\p\j\o\i\q\4\w\3\4\8\r\9\c\b\a\u\d\b\g\j\c\7\e\r\i\q\d\5\p\w\f\i\x\8\h\3\9\m\9\k\z\j\w\w\j\h\4\p\4\3\h\x\3\d\f\o\e\4\p\6\8\y\p\c\5\g\u\6\f\b\i\m\y\f\l\f\e\n\7\y\p\b\g\r\v\g\k\p\j\3\h\b\8\1\7\1\r\a\r\1\g\c\q\j\v\p\q\y\g\4\5\w\z\j\8\z\4\p\g\a\6\i\5\7\b\3\0\m\a\c\1\q\e\w\m\9\z\r\1\g\g\s\y\l\f\k\k\e\4\a\g\6\g\z\x\w\8\0\o\j\8\n\x\p\a\v\c\2\h\d\e\r\e\2\q\c\6\6\t\t\t\1\5\7\5\9\o\y\u\l\7\i\j\o\n\r\6\y\b\g\z\0\o\m\b\g\0\j\n\4\r\b\1\p\b\x\5\r\e\9\9\f\m\n\e\g\z\2\j\m\7\1\n\o\6\x\u\4\a\m\k\t\t\u\l\6\e\7\y\8\n\k\z\z\p\n\9\5\r\w\f\v\g\d\w\n\t\6\h\1\y\n\2\1\o\i\y\n\s\3\7\6\m\8\3\u\z\t\y\c\1\u\j\j\9\4\2\f\a\w\a\m\a\f\k\l\n\v\l\9\9\g\q\4\n\i\3\x\2\a\8\x\o\n\o\4\2\o\f\h\i\4\p\r\1\i\o\z\p\o\y\c\9\o\7\w\a\j\r\c\g\s\5\7\a\c\0\7\n\g\4\0\z\2\0\x\k\w\l\x\n\y\q\d\1\p\a\l\r\x\4\p\9\e\m\m\1\9\w\n\5\e\e\m\k\d\m\j\v\3\h\x\l\h\l\d\f\t\5\v\0\u\i\8\c\d\q\3\l\j\2\i\p\b\e\v\z\q\w\q\d\q\3\n\4\a\l\6\r\d\p\2\t\r\2\v\g\9\i\r\d\j\9\q\0\w\o\l\x\3\0\q\e\a\z\4\4\p\z\s\7\3\w\a\j\q\1\j\c\2\f\v\b\z\h\8\z\j\b\4\j\c\d\k\e\1\7\p\j\q\g\7\c\1\g\s\7\h\y ]] 00:07:22.195 00:07:22.195 real 0m4.961s 00:07:22.195 user 0m2.853s 00:07:22.195 sys 0m2.290s 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:22.195 ************************************ 00:07:22.195 END TEST dd_flags_misc 00:07:22.195 ************************************ 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:22.195 * Second test run, disabling liburing, forcing AIO 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:22.195 
************************************ 00:07:22.195 START TEST dd_flag_append_forced_aio 00:07:22.195 ************************************ 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # append 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=ca7wtm99ejxqbs3ueh6exolm8o8g5cxg 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=o2pzr6yq7hshpy150vbld030il630vsu 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s ca7wtm99ejxqbs3ueh6exolm8o8g5cxg 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s o2pzr6yq7hshpy150vbld030il630vsu 00:07:22.195 08:02:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:22.195 [2024-06-10 08:02:44.033814] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:22.195 [2024-06-10 08:02:44.033927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63274 ] 00:07:22.464 [2024-06-10 08:02:44.173317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.464 [2024-06-10 08:02:44.309853] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.734 [2024-06-10 08:02:44.369118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.993  Copying: 32/32 [B] (average 31 kBps) 00:07:22.993 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ o2pzr6yq7hshpy150vbld030il630vsuca7wtm99ejxqbs3ueh6exolm8o8g5cxg == \o\2\p\z\r\6\y\q\7\h\s\h\p\y\1\5\0\v\b\l\d\0\3\0\i\l\6\3\0\v\s\u\c\a\7\w\t\m\9\9\e\j\x\q\b\s\3\u\e\h\6\e\x\o\l\m\8\o\8\g\5\c\x\g ]] 00:07:22.993 00:07:22.993 real 0m0.710s 00:07:22.993 user 0m0.431s 00:07:22.993 sys 0m0.157s 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:22.993 ************************************ 00:07:22.993 END TEST dd_flag_append_forced_aio 00:07:22.993 ************************************ 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:22.993 ************************************ 00:07:22.993 START TEST dd_flag_directory_forced_aio 00:07:22.993 ************************************ 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # directory 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 
-- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.993 08:02:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.993 [2024-06-10 08:02:44.788502] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:22.993 [2024-06-10 08:02:44.788619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63299 ] 00:07:23.251 [2024-06-10 08:02:44.926958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.251 [2024-06-10 08:02:45.045156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.251 [2024-06-10 08:02:45.105649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.510 [2024-06-10 08:02:45.139827] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:23.510 [2024-06-10 08:02:45.139934] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:23.510 [2024-06-10 08:02:45.139967] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.510 [2024-06-10 08:02:45.262090] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # es=236 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # es=108 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:23.510 08:02:45 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.510 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:23.769 [2024-06-10 08:02:45.416594] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:23.769 [2024-06-10 08:02:45.416690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63310 ] 00:07:23.769 [2024-06-10 08:02:45.549152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.027 [2024-06-10 08:02:45.664655] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.027 [2024-06-10 08:02:45.721383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.027 [2024-06-10 08:02:45.756037] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.027 [2024-06-10 08:02:45.756097] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.027 [2024-06-10 08:02:45.756156] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.027 [2024-06-10 08:02:45.874931] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:24.285 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # es=236 00:07:24.285 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:24.285 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # es=108 00:07:24.285 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:07:24.285 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:07:24.285 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:24.285 00:07:24.285 real 0m1.253s 00:07:24.285 
user 0m0.730s 00:07:24.285 sys 0m0.312s 00:07:24.285 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:24.285 08:02:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:24.285 ************************************ 00:07:24.285 END TEST dd_flag_directory_forced_aio 00:07:24.285 ************************************ 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:24.285 ************************************ 00:07:24.285 START TEST dd_flag_nofollow_forced_aio 00:07:24.285 ************************************ 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # nofollow 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 
-- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.285 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.285 [2024-06-10 08:02:46.107618] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:24.285 [2024-06-10 08:02:46.107740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63344 ] 00:07:24.544 [2024-06-10 08:02:46.250237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.544 [2024-06-10 08:02:46.350677] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.544 [2024-06-10 08:02:46.407699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.802 [2024-06-10 08:02:46.441615] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:24.802 [2024-06-10 08:02:46.441679] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:24.802 [2024-06-10 08:02:46.441695] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.802 [2024-06-10 08:02:46.553153] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # es=216 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # es=88 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:24.802 
08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.802 08:02:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.060 [2024-06-10 08:02:46.706623] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:25.060 [2024-06-10 08:02:46.706725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63348 ] 00:07:25.060 [2024-06-10 08:02:46.848667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.319 [2024-06-10 08:02:46.960438] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.319 [2024-06-10 08:02:47.015504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.319 [2024-06-10 08:02:47.049068] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:25.319 [2024-06-10 08:02:47.049140] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:25.319 [2024-06-10 08:02:47.049170] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.319 [2024-06-10 08:02:47.167585] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.577 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # es=216 00:07:25.577 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:25.577 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # es=88 00:07:25.577 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:07:25.577 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:07:25.577 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:25.577 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:25.577 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:25.577 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.577 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.577 [2024-06-10 08:02:47.327422] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:25.577 [2024-06-10 08:02:47.327557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63361 ] 00:07:25.835 [2024-06-10 08:02:47.466004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.835 [2024-06-10 08:02:47.570954] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.835 [2024-06-10 08:02:47.626831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.094  Copying: 512/512 [B] (average 500 kBps) 00:07:26.094 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ s4yisknxi1pf64s3wn32tn8fyw9t1m6scpk94ias1q0k44hh94kg4guxlah85j75wwk7ilb9avm4dsfn2e2nay5oi8zxzk53bvo1oe4xj4y6qgoragsq0gin7uek3mghgc8a5ln2l08llknazycfg2kelmz4u04q33xxjpmunfmaasn2akhwsw6b0420972op0evw2qwr5sz0ubrb1xr5kg14hffebc7uc8uwb27njigjw5fej5kx5hocjyzibpc6k339wooqwkra7q1c0zur93eo10tu6d418791x2aexglm6uhrlu1wu6ybi5tcj7x5pcepcgylokbo50xvpojqrkf5x648hyut72lp3rixnutuyr4x3s2dozuzkur57kphrygpqqz7jxtm1dd8ry0caakoqsdf030zxr7j8x220illprscyrqc1n4xbmpt8r304zvoqk9khea6gm53a43cppz17bkbxyh87lyfrr31t1ix6srvyfwxhixrenf2sjc == \s\4\y\i\s\k\n\x\i\1\p\f\6\4\s\3\w\n\3\2\t\n\8\f\y\w\9\t\1\m\6\s\c\p\k\9\4\i\a\s\1\q\0\k\4\4\h\h\9\4\k\g\4\g\u\x\l\a\h\8\5\j\7\5\w\w\k\7\i\l\b\9\a\v\m\4\d\s\f\n\2\e\2\n\a\y\5\o\i\8\z\x\z\k\5\3\b\v\o\1\o\e\4\x\j\4\y\6\q\g\o\r\a\g\s\q\0\g\i\n\7\u\e\k\3\m\g\h\g\c\8\a\5\l\n\2\l\0\8\l\l\k\n\a\z\y\c\f\g\2\k\e\l\m\z\4\u\0\4\q\3\3\x\x\j\p\m\u\n\f\m\a\a\s\n\2\a\k\h\w\s\w\6\b\0\4\2\0\9\7\2\o\p\0\e\v\w\2\q\w\r\5\s\z\0\u\b\r\b\1\x\r\5\k\g\1\4\h\f\f\e\b\c\7\u\c\8\u\w\b\2\7\n\j\i\g\j\w\5\f\e\j\5\k\x\5\h\o\c\j\y\z\i\b\p\c\6\k\3\3\9\w\o\o\q\w\k\r\a\7\q\1\c\0\z\u\r\9\3\e\o\1\0\t\u\6\d\4\1\8\7\9\1\x\2\a\e\x\g\l\m\6\u\h\r\l\u\1\w\u\6\y\b\i\5\t\c\j\7\x\5\p\c\e\p\c\g\y\l\o\k\b\o\5\0\x\v\p\o\j\q\r\k\f\5\x\6\4\8\h\y\u\t\7\2\l\p\3\r\i\x\n\u\t\u\y\r\4\x\3\s\2\d\o\z\u\z\k\u\r\5\7\k\p\h\r\y\g\p\q\q\z\7\j\x\t\m\1\d\d\8\r\y\0\c\a\a\k\o\q\s\d\f\0\3\0\z\x\r\7\j\8\x\2\2\0\i\l\l\p\r\s\c\y\r\q\c\1\n\4\x\b\m\p\t\8\r\3\0\4\z\v\o\q\k\9\k\h\e\a\6\g\m\5\3\a\4\3\c\p\p\z\1\7\b\k\b\x\y\h\8\7\l\y\f\r\r\3\1\t\1\i\x\6\s\r\v\y\f\w\x\h\i\x\r\e\n\f\2\s\j\c ]] 00:07:26.094 00:07:26.094 real 0m1.862s 00:07:26.094 user 0m1.064s 00:07:26.094 sys 0m0.464s 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:26.094 ************************************ 00:07:26.094 END TEST dd_flag_nofollow_forced_aio 00:07:26.094 ************************************ 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 
00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:26.094 ************************************ 00:07:26.094 START TEST dd_flag_noatime_forced_aio 00:07:26.094 ************************************ 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # noatime 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1718006567 00:07:26.094 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.352 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1718006567 00:07:26.352 08:02:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:27.328 08:02:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.328 [2024-06-10 08:02:49.016706] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:27.328 [2024-06-10 08:02:49.016838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63406 ] 00:07:27.328 [2024-06-10 08:02:49.155187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.587 [2024-06-10 08:02:49.263945] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.587 [2024-06-10 08:02:49.320894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.846  Copying: 512/512 [B] (average 500 kBps) 00:07:27.846 00:07:27.846 08:02:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.846 08:02:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1718006567 )) 00:07:27.846 08:02:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.846 08:02:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1718006567 )) 00:07:27.846 08:02:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.846 [2024-06-10 08:02:49.681017] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:27.846 [2024-06-10 08:02:49.681136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63413 ] 00:07:28.105 [2024-06-10 08:02:49.820997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.105 [2024-06-10 08:02:49.925660] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.364 [2024-06-10 08:02:49.987049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.623  Copying: 512/512 [B] (average 500 kBps) 00:07:28.623 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1718006570 )) 00:07:28.623 00:07:28.623 real 0m2.342s 00:07:28.623 user 0m0.766s 00:07:28.623 sys 0m0.332s 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.623 ************************************ 00:07:28.623 END TEST dd_flag_noatime_forced_aio 00:07:28.623 ************************************ 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:28.623 ************************************ 00:07:28.623 START TEST 
dd_flags_misc_forced_aio 00:07:28.623 ************************************ 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # io 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:28.623 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:28.623 [2024-06-10 08:02:50.398885] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:28.623 [2024-06-10 08:02:50.398995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63445 ] 00:07:28.882 [2024-06-10 08:02:50.533443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.882 [2024-06-10 08:02:50.668133] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.882 [2024-06-10 08:02:50.721444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.141  Copying: 512/512 [B] (average 500 kBps) 00:07:29.141 00:07:29.141 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jw6ehotqcey2vzokoycxoqn7v22doqenhrppxvnsancq5t1nyj1i5bq4fawot7y0rs2bp3au6pe27jb2a80xg5tew0m0pwbo0tkiw0g6h3vh20gmk2qgvqm66pbgjsi1f3dov40uzirmo7i0ibjgm9nc8xcsbwitrow0l48ezeqwiegjlwg8ukairrvogn8669oxluiemorin8475rvahtvxszhr305ykm5q19xexbd9tikdgsjem6fcxw4328lvhki6k6t97zq4pgt6hrdicrhacgcrfzzw67wsth39kvo66n5ygux29bv7a2askzcdnpeip7bl35iho9tb9cj44ssa4l5qer3cyz4kxq941mm3yawj4y1taxv353vnrhtnnlrjaxwipzk0woj0pn8trwy4da4qsm0jdjq9uw8a3c6r1kkmjhkxlncvu0apqbyzn4ylgsokpktrprr4phimerp7c31vgwaywf50kj9on4cj8r9m3hwo20qunoxvg5bg == 
\j\w\6\e\h\o\t\q\c\e\y\2\v\z\o\k\o\y\c\x\o\q\n\7\v\2\2\d\o\q\e\n\h\r\p\p\x\v\n\s\a\n\c\q\5\t\1\n\y\j\1\i\5\b\q\4\f\a\w\o\t\7\y\0\r\s\2\b\p\3\a\u\6\p\e\2\7\j\b\2\a\8\0\x\g\5\t\e\w\0\m\0\p\w\b\o\0\t\k\i\w\0\g\6\h\3\v\h\2\0\g\m\k\2\q\g\v\q\m\6\6\p\b\g\j\s\i\1\f\3\d\o\v\4\0\u\z\i\r\m\o\7\i\0\i\b\j\g\m\9\n\c\8\x\c\s\b\w\i\t\r\o\w\0\l\4\8\e\z\e\q\w\i\e\g\j\l\w\g\8\u\k\a\i\r\r\v\o\g\n\8\6\6\9\o\x\l\u\i\e\m\o\r\i\n\8\4\7\5\r\v\a\h\t\v\x\s\z\h\r\3\0\5\y\k\m\5\q\1\9\x\e\x\b\d\9\t\i\k\d\g\s\j\e\m\6\f\c\x\w\4\3\2\8\l\v\h\k\i\6\k\6\t\9\7\z\q\4\p\g\t\6\h\r\d\i\c\r\h\a\c\g\c\r\f\z\z\w\6\7\w\s\t\h\3\9\k\v\o\6\6\n\5\y\g\u\x\2\9\b\v\7\a\2\a\s\k\z\c\d\n\p\e\i\p\7\b\l\3\5\i\h\o\9\t\b\9\c\j\4\4\s\s\a\4\l\5\q\e\r\3\c\y\z\4\k\x\q\9\4\1\m\m\3\y\a\w\j\4\y\1\t\a\x\v\3\5\3\v\n\r\h\t\n\n\l\r\j\a\x\w\i\p\z\k\0\w\o\j\0\p\n\8\t\r\w\y\4\d\a\4\q\s\m\0\j\d\j\q\9\u\w\8\a\3\c\6\r\1\k\k\m\j\h\k\x\l\n\c\v\u\0\a\p\q\b\y\z\n\4\y\l\g\s\o\k\p\k\t\r\p\r\r\4\p\h\i\m\e\r\p\7\c\3\1\v\g\w\a\y\w\f\5\0\k\j\9\o\n\4\c\j\8\r\9\m\3\h\w\o\2\0\q\u\n\o\x\v\g\5\b\g ]] 00:07:29.141 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.141 08:02:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:29.400 [2024-06-10 08:02:51.041771] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:29.400 [2024-06-10 08:02:51.041967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63453 ] 00:07:29.400 [2024-06-10 08:02:51.174396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.659 [2024-06-10 08:02:51.302602] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.659 [2024-06-10 08:02:51.357929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.918  Copying: 512/512 [B] (average 500 kBps) 00:07:29.918 00:07:29.918 08:02:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jw6ehotqcey2vzokoycxoqn7v22doqenhrppxvnsancq5t1nyj1i5bq4fawot7y0rs2bp3au6pe27jb2a80xg5tew0m0pwbo0tkiw0g6h3vh20gmk2qgvqm66pbgjsi1f3dov40uzirmo7i0ibjgm9nc8xcsbwitrow0l48ezeqwiegjlwg8ukairrvogn8669oxluiemorin8475rvahtvxszhr305ykm5q19xexbd9tikdgsjem6fcxw4328lvhki6k6t97zq4pgt6hrdicrhacgcrfzzw67wsth39kvo66n5ygux29bv7a2askzcdnpeip7bl35iho9tb9cj44ssa4l5qer3cyz4kxq941mm3yawj4y1taxv353vnrhtnnlrjaxwipzk0woj0pn8trwy4da4qsm0jdjq9uw8a3c6r1kkmjhkxlncvu0apqbyzn4ylgsokpktrprr4phimerp7c31vgwaywf50kj9on4cj8r9m3hwo20qunoxvg5bg == 
\j\w\6\e\h\o\t\q\c\e\y\2\v\z\o\k\o\y\c\x\o\q\n\7\v\2\2\d\o\q\e\n\h\r\p\p\x\v\n\s\a\n\c\q\5\t\1\n\y\j\1\i\5\b\q\4\f\a\w\o\t\7\y\0\r\s\2\b\p\3\a\u\6\p\e\2\7\j\b\2\a\8\0\x\g\5\t\e\w\0\m\0\p\w\b\o\0\t\k\i\w\0\g\6\h\3\v\h\2\0\g\m\k\2\q\g\v\q\m\6\6\p\b\g\j\s\i\1\f\3\d\o\v\4\0\u\z\i\r\m\o\7\i\0\i\b\j\g\m\9\n\c\8\x\c\s\b\w\i\t\r\o\w\0\l\4\8\e\z\e\q\w\i\e\g\j\l\w\g\8\u\k\a\i\r\r\v\o\g\n\8\6\6\9\o\x\l\u\i\e\m\o\r\i\n\8\4\7\5\r\v\a\h\t\v\x\s\z\h\r\3\0\5\y\k\m\5\q\1\9\x\e\x\b\d\9\t\i\k\d\g\s\j\e\m\6\f\c\x\w\4\3\2\8\l\v\h\k\i\6\k\6\t\9\7\z\q\4\p\g\t\6\h\r\d\i\c\r\h\a\c\g\c\r\f\z\z\w\6\7\w\s\t\h\3\9\k\v\o\6\6\n\5\y\g\u\x\2\9\b\v\7\a\2\a\s\k\z\c\d\n\p\e\i\p\7\b\l\3\5\i\h\o\9\t\b\9\c\j\4\4\s\s\a\4\l\5\q\e\r\3\c\y\z\4\k\x\q\9\4\1\m\m\3\y\a\w\j\4\y\1\t\a\x\v\3\5\3\v\n\r\h\t\n\n\l\r\j\a\x\w\i\p\z\k\0\w\o\j\0\p\n\8\t\r\w\y\4\d\a\4\q\s\m\0\j\d\j\q\9\u\w\8\a\3\c\6\r\1\k\k\m\j\h\k\x\l\n\c\v\u\0\a\p\q\b\y\z\n\4\y\l\g\s\o\k\p\k\t\r\p\r\r\4\p\h\i\m\e\r\p\7\c\3\1\v\g\w\a\y\w\f\5\0\k\j\9\o\n\4\c\j\8\r\9\m\3\h\w\o\2\0\q\u\n\o\x\v\g\5\b\g ]] 00:07:29.918 08:02:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.918 08:02:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:29.918 [2024-06-10 08:02:51.686297] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:29.918 [2024-06-10 08:02:51.686434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63460 ] 00:07:30.177 [2024-06-10 08:02:51.818579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.177 [2024-06-10 08:02:51.924857] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.177 [2024-06-10 08:02:51.985886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.435  Copying: 512/512 [B] (average 55 kBps) 00:07:30.435 00:07:30.435 08:02:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jw6ehotqcey2vzokoycxoqn7v22doqenhrppxvnsancq5t1nyj1i5bq4fawot7y0rs2bp3au6pe27jb2a80xg5tew0m0pwbo0tkiw0g6h3vh20gmk2qgvqm66pbgjsi1f3dov40uzirmo7i0ibjgm9nc8xcsbwitrow0l48ezeqwiegjlwg8ukairrvogn8669oxluiemorin8475rvahtvxszhr305ykm5q19xexbd9tikdgsjem6fcxw4328lvhki6k6t97zq4pgt6hrdicrhacgcrfzzw67wsth39kvo66n5ygux29bv7a2askzcdnpeip7bl35iho9tb9cj44ssa4l5qer3cyz4kxq941mm3yawj4y1taxv353vnrhtnnlrjaxwipzk0woj0pn8trwy4da4qsm0jdjq9uw8a3c6r1kkmjhkxlncvu0apqbyzn4ylgsokpktrprr4phimerp7c31vgwaywf50kj9on4cj8r9m3hwo20qunoxvg5bg == 
\j\w\6\e\h\o\t\q\c\e\y\2\v\z\o\k\o\y\c\x\o\q\n\7\v\2\2\d\o\q\e\n\h\r\p\p\x\v\n\s\a\n\c\q\5\t\1\n\y\j\1\i\5\b\q\4\f\a\w\o\t\7\y\0\r\s\2\b\p\3\a\u\6\p\e\2\7\j\b\2\a\8\0\x\g\5\t\e\w\0\m\0\p\w\b\o\0\t\k\i\w\0\g\6\h\3\v\h\2\0\g\m\k\2\q\g\v\q\m\6\6\p\b\g\j\s\i\1\f\3\d\o\v\4\0\u\z\i\r\m\o\7\i\0\i\b\j\g\m\9\n\c\8\x\c\s\b\w\i\t\r\o\w\0\l\4\8\e\z\e\q\w\i\e\g\j\l\w\g\8\u\k\a\i\r\r\v\o\g\n\8\6\6\9\o\x\l\u\i\e\m\o\r\i\n\8\4\7\5\r\v\a\h\t\v\x\s\z\h\r\3\0\5\y\k\m\5\q\1\9\x\e\x\b\d\9\t\i\k\d\g\s\j\e\m\6\f\c\x\w\4\3\2\8\l\v\h\k\i\6\k\6\t\9\7\z\q\4\p\g\t\6\h\r\d\i\c\r\h\a\c\g\c\r\f\z\z\w\6\7\w\s\t\h\3\9\k\v\o\6\6\n\5\y\g\u\x\2\9\b\v\7\a\2\a\s\k\z\c\d\n\p\e\i\p\7\b\l\3\5\i\h\o\9\t\b\9\c\j\4\4\s\s\a\4\l\5\q\e\r\3\c\y\z\4\k\x\q\9\4\1\m\m\3\y\a\w\j\4\y\1\t\a\x\v\3\5\3\v\n\r\h\t\n\n\l\r\j\a\x\w\i\p\z\k\0\w\o\j\0\p\n\8\t\r\w\y\4\d\a\4\q\s\m\0\j\d\j\q\9\u\w\8\a\3\c\6\r\1\k\k\m\j\h\k\x\l\n\c\v\u\0\a\p\q\b\y\z\n\4\y\l\g\s\o\k\p\k\t\r\p\r\r\4\p\h\i\m\e\r\p\7\c\3\1\v\g\w\a\y\w\f\5\0\k\j\9\o\n\4\c\j\8\r\9\m\3\h\w\o\2\0\q\u\n\o\x\v\g\5\b\g ]] 00:07:30.435 08:02:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:30.435 08:02:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:30.694 [2024-06-10 08:02:52.341735] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:30.694 [2024-06-10 08:02:52.341868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63473 ] 00:07:30.694 [2024-06-10 08:02:52.479209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.953 [2024-06-10 08:02:52.596651] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.953 [2024-06-10 08:02:52.656123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.212  Copying: 512/512 [B] (average 500 kBps) 00:07:31.212 00:07:31.213 08:02:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jw6ehotqcey2vzokoycxoqn7v22doqenhrppxvnsancq5t1nyj1i5bq4fawot7y0rs2bp3au6pe27jb2a80xg5tew0m0pwbo0tkiw0g6h3vh20gmk2qgvqm66pbgjsi1f3dov40uzirmo7i0ibjgm9nc8xcsbwitrow0l48ezeqwiegjlwg8ukairrvogn8669oxluiemorin8475rvahtvxszhr305ykm5q19xexbd9tikdgsjem6fcxw4328lvhki6k6t97zq4pgt6hrdicrhacgcrfzzw67wsth39kvo66n5ygux29bv7a2askzcdnpeip7bl35iho9tb9cj44ssa4l5qer3cyz4kxq941mm3yawj4y1taxv353vnrhtnnlrjaxwipzk0woj0pn8trwy4da4qsm0jdjq9uw8a3c6r1kkmjhkxlncvu0apqbyzn4ylgsokpktrprr4phimerp7c31vgwaywf50kj9on4cj8r9m3hwo20qunoxvg5bg == 
\j\w\6\e\h\o\t\q\c\e\y\2\v\z\o\k\o\y\c\x\o\q\n\7\v\2\2\d\o\q\e\n\h\r\p\p\x\v\n\s\a\n\c\q\5\t\1\n\y\j\1\i\5\b\q\4\f\a\w\o\t\7\y\0\r\s\2\b\p\3\a\u\6\p\e\2\7\j\b\2\a\8\0\x\g\5\t\e\w\0\m\0\p\w\b\o\0\t\k\i\w\0\g\6\h\3\v\h\2\0\g\m\k\2\q\g\v\q\m\6\6\p\b\g\j\s\i\1\f\3\d\o\v\4\0\u\z\i\r\m\o\7\i\0\i\b\j\g\m\9\n\c\8\x\c\s\b\w\i\t\r\o\w\0\l\4\8\e\z\e\q\w\i\e\g\j\l\w\g\8\u\k\a\i\r\r\v\o\g\n\8\6\6\9\o\x\l\u\i\e\m\o\r\i\n\8\4\7\5\r\v\a\h\t\v\x\s\z\h\r\3\0\5\y\k\m\5\q\1\9\x\e\x\b\d\9\t\i\k\d\g\s\j\e\m\6\f\c\x\w\4\3\2\8\l\v\h\k\i\6\k\6\t\9\7\z\q\4\p\g\t\6\h\r\d\i\c\r\h\a\c\g\c\r\f\z\z\w\6\7\w\s\t\h\3\9\k\v\o\6\6\n\5\y\g\u\x\2\9\b\v\7\a\2\a\s\k\z\c\d\n\p\e\i\p\7\b\l\3\5\i\h\o\9\t\b\9\c\j\4\4\s\s\a\4\l\5\q\e\r\3\c\y\z\4\k\x\q\9\4\1\m\m\3\y\a\w\j\4\y\1\t\a\x\v\3\5\3\v\n\r\h\t\n\n\l\r\j\a\x\w\i\p\z\k\0\w\o\j\0\p\n\8\t\r\w\y\4\d\a\4\q\s\m\0\j\d\j\q\9\u\w\8\a\3\c\6\r\1\k\k\m\j\h\k\x\l\n\c\v\u\0\a\p\q\b\y\z\n\4\y\l\g\s\o\k\p\k\t\r\p\r\r\4\p\h\i\m\e\r\p\7\c\3\1\v\g\w\a\y\w\f\5\0\k\j\9\o\n\4\c\j\8\r\9\m\3\h\w\o\2\0\q\u\n\o\x\v\g\5\b\g ]] 00:07:31.213 08:02:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:31.213 08:02:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:31.213 08:02:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:31.213 08:02:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:31.213 08:02:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.213 08:02:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:31.213 [2024-06-10 08:02:53.037217] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:31.213 [2024-06-10 08:02:53.037983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63480 ] 00:07:31.472 [2024-06-10 08:02:53.181043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.472 [2024-06-10 08:02:53.316199] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.731 [2024-06-10 08:02:53.375207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.990  Copying: 512/512 [B] (average 500 kBps) 00:07:31.990 00:07:31.990 08:02:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ulw33ozjo3ycs4mxnke2wjvti61pbl7y7utg85dt851gtxezl98xfzhvfz2r5qobv70xbjsbv6vjgoqdrc9m6j0gpha71bmy3t9c62q8pqzvx8h1b8iku6txt2m5xjy1gjjelg34fvhl2rjce8snh6j7pwlqfk4wzksirhwh69qgw1hyikzt8wkixz4kzgahrh0p5l1w1rx3zo03td5wtt9f16rrgwf7magckj0ngd1d4c9zia174iszqz6fkqf2dks42yn6mpeibas4z2sj21ymntii8oue9xbhdq1mtfcusiylpjuh3pusxnclxuol3m6svkupzdqfr1yicq5hm1dbwo765990bvgej6z4ae8eyasznvlumud09v2yxkgojra9wakr9cprl6gukofnnnqkerbnvc20gm2ntvg2fbsjok4yg1pfymm8xfm5dqdu8dqvcwgfepdngva23e5q0f6ismitru9lj5ty4s1u03cxair9ewb7fwmq3uqoyleh == \u\l\w\3\3\o\z\j\o\3\y\c\s\4\m\x\n\k\e\2\w\j\v\t\i\6\1\p\b\l\7\y\7\u\t\g\8\5\d\t\8\5\1\g\t\x\e\z\l\9\8\x\f\z\h\v\f\z\2\r\5\q\o\b\v\7\0\x\b\j\s\b\v\6\v\j\g\o\q\d\r\c\9\m\6\j\0\g\p\h\a\7\1\b\m\y\3\t\9\c\6\2\q\8\p\q\z\v\x\8\h\1\b\8\i\k\u\6\t\x\t\2\m\5\x\j\y\1\g\j\j\e\l\g\3\4\f\v\h\l\2\r\j\c\e\8\s\n\h\6\j\7\p\w\l\q\f\k\4\w\z\k\s\i\r\h\w\h\6\9\q\g\w\1\h\y\i\k\z\t\8\w\k\i\x\z\4\k\z\g\a\h\r\h\0\p\5\l\1\w\1\r\x\3\z\o\0\3\t\d\5\w\t\t\9\f\1\6\r\r\g\w\f\7\m\a\g\c\k\j\0\n\g\d\1\d\4\c\9\z\i\a\1\7\4\i\s\z\q\z\6\f\k\q\f\2\d\k\s\4\2\y\n\6\m\p\e\i\b\a\s\4\z\2\s\j\2\1\y\m\n\t\i\i\8\o\u\e\9\x\b\h\d\q\1\m\t\f\c\u\s\i\y\l\p\j\u\h\3\p\u\s\x\n\c\l\x\u\o\l\3\m\6\s\v\k\u\p\z\d\q\f\r\1\y\i\c\q\5\h\m\1\d\b\w\o\7\6\5\9\9\0\b\v\g\e\j\6\z\4\a\e\8\e\y\a\s\z\n\v\l\u\m\u\d\0\9\v\2\y\x\k\g\o\j\r\a\9\w\a\k\r\9\c\p\r\l\6\g\u\k\o\f\n\n\n\q\k\e\r\b\n\v\c\2\0\g\m\2\n\t\v\g\2\f\b\s\j\o\k\4\y\g\1\p\f\y\m\m\8\x\f\m\5\d\q\d\u\8\d\q\v\c\w\g\f\e\p\d\n\g\v\a\2\3\e\5\q\0\f\6\i\s\m\i\t\r\u\9\l\j\5\t\y\4\s\1\u\0\3\c\x\a\i\r\9\e\w\b\7\f\w\m\q\3\u\q\o\y\l\e\h ]] 00:07:31.990 08:02:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.990 08:02:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:31.990 [2024-06-10 08:02:53.734552] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:31.990 [2024-06-10 08:02:53.734681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63488 ] 00:07:32.249 [2024-06-10 08:02:53.881463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.249 [2024-06-10 08:02:54.002107] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.249 [2024-06-10 08:02:54.056634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.508  Copying: 512/512 [B] (average 500 kBps) 00:07:32.508 00:07:32.509 08:02:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ulw33ozjo3ycs4mxnke2wjvti61pbl7y7utg85dt851gtxezl98xfzhvfz2r5qobv70xbjsbv6vjgoqdrc9m6j0gpha71bmy3t9c62q8pqzvx8h1b8iku6txt2m5xjy1gjjelg34fvhl2rjce8snh6j7pwlqfk4wzksirhwh69qgw1hyikzt8wkixz4kzgahrh0p5l1w1rx3zo03td5wtt9f16rrgwf7magckj0ngd1d4c9zia174iszqz6fkqf2dks42yn6mpeibas4z2sj21ymntii8oue9xbhdq1mtfcusiylpjuh3pusxnclxuol3m6svkupzdqfr1yicq5hm1dbwo765990bvgej6z4ae8eyasznvlumud09v2yxkgojra9wakr9cprl6gukofnnnqkerbnvc20gm2ntvg2fbsjok4yg1pfymm8xfm5dqdu8dqvcwgfepdngva23e5q0f6ismitru9lj5ty4s1u03cxair9ewb7fwmq3uqoyleh == \u\l\w\3\3\o\z\j\o\3\y\c\s\4\m\x\n\k\e\2\w\j\v\t\i\6\1\p\b\l\7\y\7\u\t\g\8\5\d\t\8\5\1\g\t\x\e\z\l\9\8\x\f\z\h\v\f\z\2\r\5\q\o\b\v\7\0\x\b\j\s\b\v\6\v\j\g\o\q\d\r\c\9\m\6\j\0\g\p\h\a\7\1\b\m\y\3\t\9\c\6\2\q\8\p\q\z\v\x\8\h\1\b\8\i\k\u\6\t\x\t\2\m\5\x\j\y\1\g\j\j\e\l\g\3\4\f\v\h\l\2\r\j\c\e\8\s\n\h\6\j\7\p\w\l\q\f\k\4\w\z\k\s\i\r\h\w\h\6\9\q\g\w\1\h\y\i\k\z\t\8\w\k\i\x\z\4\k\z\g\a\h\r\h\0\p\5\l\1\w\1\r\x\3\z\o\0\3\t\d\5\w\t\t\9\f\1\6\r\r\g\w\f\7\m\a\g\c\k\j\0\n\g\d\1\d\4\c\9\z\i\a\1\7\4\i\s\z\q\z\6\f\k\q\f\2\d\k\s\4\2\y\n\6\m\p\e\i\b\a\s\4\z\2\s\j\2\1\y\m\n\t\i\i\8\o\u\e\9\x\b\h\d\q\1\m\t\f\c\u\s\i\y\l\p\j\u\h\3\p\u\s\x\n\c\l\x\u\o\l\3\m\6\s\v\k\u\p\z\d\q\f\r\1\y\i\c\q\5\h\m\1\d\b\w\o\7\6\5\9\9\0\b\v\g\e\j\6\z\4\a\e\8\e\y\a\s\z\n\v\l\u\m\u\d\0\9\v\2\y\x\k\g\o\j\r\a\9\w\a\k\r\9\c\p\r\l\6\g\u\k\o\f\n\n\n\q\k\e\r\b\n\v\c\2\0\g\m\2\n\t\v\g\2\f\b\s\j\o\k\4\y\g\1\p\f\y\m\m\8\x\f\m\5\d\q\d\u\8\d\q\v\c\w\g\f\e\p\d\n\g\v\a\2\3\e\5\q\0\f\6\i\s\m\i\t\r\u\9\l\j\5\t\y\4\s\1\u\0\3\c\x\a\i\r\9\e\w\b\7\f\w\m\q\3\u\q\o\y\l\e\h ]] 00:07:32.509 08:02:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:32.509 08:02:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:32.768 [2024-06-10 08:02:54.378197] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:32.768 [2024-06-10 08:02:54.378288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63501 ] 00:07:32.768 [2024-06-10 08:02:54.510252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.768 [2024-06-10 08:02:54.631707] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.027 [2024-06-10 08:02:54.689813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.285  Copying: 512/512 [B] (average 500 kBps) 00:07:33.285 00:07:33.285 08:02:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ulw33ozjo3ycs4mxnke2wjvti61pbl7y7utg85dt851gtxezl98xfzhvfz2r5qobv70xbjsbv6vjgoqdrc9m6j0gpha71bmy3t9c62q8pqzvx8h1b8iku6txt2m5xjy1gjjelg34fvhl2rjce8snh6j7pwlqfk4wzksirhwh69qgw1hyikzt8wkixz4kzgahrh0p5l1w1rx3zo03td5wtt9f16rrgwf7magckj0ngd1d4c9zia174iszqz6fkqf2dks42yn6mpeibas4z2sj21ymntii8oue9xbhdq1mtfcusiylpjuh3pusxnclxuol3m6svkupzdqfr1yicq5hm1dbwo765990bvgej6z4ae8eyasznvlumud09v2yxkgojra9wakr9cprl6gukofnnnqkerbnvc20gm2ntvg2fbsjok4yg1pfymm8xfm5dqdu8dqvcwgfepdngva23e5q0f6ismitru9lj5ty4s1u03cxair9ewb7fwmq3uqoyleh == \u\l\w\3\3\o\z\j\o\3\y\c\s\4\m\x\n\k\e\2\w\j\v\t\i\6\1\p\b\l\7\y\7\u\t\g\8\5\d\t\8\5\1\g\t\x\e\z\l\9\8\x\f\z\h\v\f\z\2\r\5\q\o\b\v\7\0\x\b\j\s\b\v\6\v\j\g\o\q\d\r\c\9\m\6\j\0\g\p\h\a\7\1\b\m\y\3\t\9\c\6\2\q\8\p\q\z\v\x\8\h\1\b\8\i\k\u\6\t\x\t\2\m\5\x\j\y\1\g\j\j\e\l\g\3\4\f\v\h\l\2\r\j\c\e\8\s\n\h\6\j\7\p\w\l\q\f\k\4\w\z\k\s\i\r\h\w\h\6\9\q\g\w\1\h\y\i\k\z\t\8\w\k\i\x\z\4\k\z\g\a\h\r\h\0\p\5\l\1\w\1\r\x\3\z\o\0\3\t\d\5\w\t\t\9\f\1\6\r\r\g\w\f\7\m\a\g\c\k\j\0\n\g\d\1\d\4\c\9\z\i\a\1\7\4\i\s\z\q\z\6\f\k\q\f\2\d\k\s\4\2\y\n\6\m\p\e\i\b\a\s\4\z\2\s\j\2\1\y\m\n\t\i\i\8\o\u\e\9\x\b\h\d\q\1\m\t\f\c\u\s\i\y\l\p\j\u\h\3\p\u\s\x\n\c\l\x\u\o\l\3\m\6\s\v\k\u\p\z\d\q\f\r\1\y\i\c\q\5\h\m\1\d\b\w\o\7\6\5\9\9\0\b\v\g\e\j\6\z\4\a\e\8\e\y\a\s\z\n\v\l\u\m\u\d\0\9\v\2\y\x\k\g\o\j\r\a\9\w\a\k\r\9\c\p\r\l\6\g\u\k\o\f\n\n\n\q\k\e\r\b\n\v\c\2\0\g\m\2\n\t\v\g\2\f\b\s\j\o\k\4\y\g\1\p\f\y\m\m\8\x\f\m\5\d\q\d\u\8\d\q\v\c\w\g\f\e\p\d\n\g\v\a\2\3\e\5\q\0\f\6\i\s\m\i\t\r\u\9\l\j\5\t\y\4\s\1\u\0\3\c\x\a\i\r\9\e\w\b\7\f\w\m\q\3\u\q\o\y\l\e\h ]] 00:07:33.285 08:02:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:33.285 08:02:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:33.285 [2024-06-10 08:02:55.024126] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:33.285 [2024-06-10 08:02:55.024228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63503 ] 00:07:33.544 [2024-06-10 08:02:55.160499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.544 [2024-06-10 08:02:55.271084] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.544 [2024-06-10 08:02:55.325500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.802  Copying: 512/512 [B] (average 500 kBps) 00:07:33.802 00:07:33.803 08:02:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ulw33ozjo3ycs4mxnke2wjvti61pbl7y7utg85dt851gtxezl98xfzhvfz2r5qobv70xbjsbv6vjgoqdrc9m6j0gpha71bmy3t9c62q8pqzvx8h1b8iku6txt2m5xjy1gjjelg34fvhl2rjce8snh6j7pwlqfk4wzksirhwh69qgw1hyikzt8wkixz4kzgahrh0p5l1w1rx3zo03td5wtt9f16rrgwf7magckj0ngd1d4c9zia174iszqz6fkqf2dks42yn6mpeibas4z2sj21ymntii8oue9xbhdq1mtfcusiylpjuh3pusxnclxuol3m6svkupzdqfr1yicq5hm1dbwo765990bvgej6z4ae8eyasznvlumud09v2yxkgojra9wakr9cprl6gukofnnnqkerbnvc20gm2ntvg2fbsjok4yg1pfymm8xfm5dqdu8dqvcwgfepdngva23e5q0f6ismitru9lj5ty4s1u03cxair9ewb7fwmq3uqoyleh == \u\l\w\3\3\o\z\j\o\3\y\c\s\4\m\x\n\k\e\2\w\j\v\t\i\6\1\p\b\l\7\y\7\u\t\g\8\5\d\t\8\5\1\g\t\x\e\z\l\9\8\x\f\z\h\v\f\z\2\r\5\q\o\b\v\7\0\x\b\j\s\b\v\6\v\j\g\o\q\d\r\c\9\m\6\j\0\g\p\h\a\7\1\b\m\y\3\t\9\c\6\2\q\8\p\q\z\v\x\8\h\1\b\8\i\k\u\6\t\x\t\2\m\5\x\j\y\1\g\j\j\e\l\g\3\4\f\v\h\l\2\r\j\c\e\8\s\n\h\6\j\7\p\w\l\q\f\k\4\w\z\k\s\i\r\h\w\h\6\9\q\g\w\1\h\y\i\k\z\t\8\w\k\i\x\z\4\k\z\g\a\h\r\h\0\p\5\l\1\w\1\r\x\3\z\o\0\3\t\d\5\w\t\t\9\f\1\6\r\r\g\w\f\7\m\a\g\c\k\j\0\n\g\d\1\d\4\c\9\z\i\a\1\7\4\i\s\z\q\z\6\f\k\q\f\2\d\k\s\4\2\y\n\6\m\p\e\i\b\a\s\4\z\2\s\j\2\1\y\m\n\t\i\i\8\o\u\e\9\x\b\h\d\q\1\m\t\f\c\u\s\i\y\l\p\j\u\h\3\p\u\s\x\n\c\l\x\u\o\l\3\m\6\s\v\k\u\p\z\d\q\f\r\1\y\i\c\q\5\h\m\1\d\b\w\o\7\6\5\9\9\0\b\v\g\e\j\6\z\4\a\e\8\e\y\a\s\z\n\v\l\u\m\u\d\0\9\v\2\y\x\k\g\o\j\r\a\9\w\a\k\r\9\c\p\r\l\6\g\u\k\o\f\n\n\n\q\k\e\r\b\n\v\c\2\0\g\m\2\n\t\v\g\2\f\b\s\j\o\k\4\y\g\1\p\f\y\m\m\8\x\f\m\5\d\q\d\u\8\d\q\v\c\w\g\f\e\p\d\n\g\v\a\2\3\e\5\q\0\f\6\i\s\m\i\t\r\u\9\l\j\5\t\y\4\s\1\u\0\3\c\x\a\i\r\9\e\w\b\7\f\w\m\q\3\u\q\o\y\l\e\h ]] 00:07:33.803 00:07:33.803 real 0m5.263s 00:07:33.803 user 0m3.046s 00:07:33.803 sys 0m1.223s 00:07:33.803 08:02:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:33.803 08:02:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:33.803 ************************************ 00:07:33.803 END TEST dd_flags_misc_forced_aio 00:07:33.803 ************************************ 00:07:33.803 08:02:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:33.803 08:02:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:33.803 08:02:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:33.803 00:07:33.803 real 0m22.908s 00:07:33.803 user 0m11.923s 00:07:33.803 sys 0m6.872s 00:07:33.803 08:02:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:33.803 08:02:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:33.803 ************************************ 
00:07:33.803 END TEST spdk_dd_posix 00:07:33.803 ************************************ 00:07:34.061 08:02:55 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:34.061 08:02:55 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:34.061 08:02:55 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:34.061 08:02:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:34.061 ************************************ 00:07:34.061 START TEST spdk_dd_malloc 00:07:34.061 ************************************ 00:07:34.061 08:02:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:34.061 * Looking for test storage... 00:07:34.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:34.061 08:02:55 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.061 08:02:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.061 08:02:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.061 08:02:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.061 08:02:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.061 08:02:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.061 08:02:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.061 08:02:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:34.062 ************************************ 00:07:34.062 START TEST dd_malloc_copy 00:07:34.062 ************************************ 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # malloc_copy 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:34.062 08:02:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:34.062 [2024-06-10 08:02:55.870737] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:34.062 [2024-06-10 08:02:55.870934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63577 ] 00:07:34.062 { 00:07:34.062 "subsystems": [ 00:07:34.062 { 00:07:34.062 "subsystem": "bdev", 00:07:34.062 "config": [ 00:07:34.062 { 00:07:34.062 "params": { 00:07:34.062 "block_size": 512, 00:07:34.062 "num_blocks": 1048576, 00:07:34.062 "name": "malloc0" 00:07:34.062 }, 00:07:34.062 "method": "bdev_malloc_create" 00:07:34.062 }, 00:07:34.062 { 00:07:34.062 "params": { 00:07:34.062 "block_size": 512, 00:07:34.062 "num_blocks": 1048576, 00:07:34.062 "name": "malloc1" 00:07:34.062 }, 00:07:34.062 "method": "bdev_malloc_create" 00:07:34.062 }, 00:07:34.062 { 00:07:34.062 "method": "bdev_wait_for_examine" 00:07:34.062 } 00:07:34.062 ] 00:07:34.062 } 00:07:34.062 ] 00:07:34.062 } 00:07:34.320 [2024-06-10 08:02:56.020967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.320 [2024-06-10 08:02:56.137760] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.579 [2024-06-10 08:02:56.191950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.032  Copying: 202/512 [MB] (202 MBps) Copying: 398/512 [MB] (195 MBps) Copying: 512/512 [MB] (average 198 MBps) 00:07:38.032 00:07:38.032 08:02:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:38.032 08:02:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:38.032 08:02:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:38.032 08:02:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:38.032 [2024-06-10 08:02:59.854745] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:38.032 [2024-06-10 08:02:59.854882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63630 ] 00:07:38.032 { 00:07:38.032 "subsystems": [ 00:07:38.032 { 00:07:38.032 "subsystem": "bdev", 00:07:38.032 "config": [ 00:07:38.032 { 00:07:38.032 "params": { 00:07:38.032 "block_size": 512, 00:07:38.032 "num_blocks": 1048576, 00:07:38.032 "name": "malloc0" 00:07:38.032 }, 00:07:38.032 "method": "bdev_malloc_create" 00:07:38.032 }, 00:07:38.032 { 00:07:38.032 "params": { 00:07:38.032 "block_size": 512, 00:07:38.032 "num_blocks": 1048576, 00:07:38.032 "name": "malloc1" 00:07:38.032 }, 00:07:38.032 "method": "bdev_malloc_create" 00:07:38.032 }, 00:07:38.032 { 00:07:38.032 "method": "bdev_wait_for_examine" 00:07:38.032 } 00:07:38.032 ] 00:07:38.032 } 00:07:38.032 ] 00:07:38.032 } 00:07:38.291 [2024-06-10 08:02:59.989857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.291 [2024-06-10 08:03:00.107600] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.550 [2024-06-10 08:03:00.165480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.995  Copying: 200/512 [MB] (200 MBps) Copying: 406/512 [MB] (206 MBps) Copying: 512/512 [MB] (average 203 MBps) 00:07:41.995 00:07:41.995 00:07:41.995 real 0m7.884s 00:07:41.995 user 0m6.826s 00:07:41.995 sys 0m0.912s 00:07:41.995 08:03:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:41.995 08:03:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.995 ************************************ 00:07:41.995 END TEST dd_malloc_copy 00:07:41.995 ************************************ 00:07:41.995 00:07:41.995 real 0m8.023s 00:07:41.995 user 0m6.879s 00:07:41.995 sys 0m1.000s 00:07:41.995 08:03:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:41.995 08:03:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:41.995 ************************************ 00:07:41.995 END TEST spdk_dd_malloc 00:07:41.995 ************************************ 00:07:41.995 08:03:03 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:41.995 08:03:03 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:41.995 08:03:03 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:41.995 08:03:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:41.995 ************************************ 00:07:41.995 START TEST spdk_dd_bdev_to_bdev 00:07:41.995 ************************************ 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:41.995 * Looking for test storage... 
00:07:41.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:41.995 
08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.995 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.996 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:41.996 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:41.996 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:41.996 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:41.996 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:41.996 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:41.996 ************************************ 00:07:41.996 START TEST dd_inflate_file 00:07:41.996 ************************************ 00:07:42.254 08:03:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:42.254 [2024-06-10 08:03:03.913664] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:42.254 [2024-06-10 08:03:03.913821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63739 ] 00:07:42.254 [2024-06-10 08:03:04.054055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.513 [2024-06-10 08:03:04.163989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.513 [2024-06-10 08:03:04.221933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.772  Copying: 64/64 [MB] (average 1684 MBps) 00:07:42.772 00:07:42.772 00:07:42.772 real 0m0.648s 00:07:42.772 user 0m0.388s 00:07:42.772 sys 0m0.309s 00:07:42.772 ************************************ 00:07:42.772 END TEST dd_inflate_file 00:07:42.772 ************************************ 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:42.772 ************************************ 00:07:42.772 START TEST dd_copy_to_out_bdev 00:07:42.772 ************************************ 00:07:42.772 08:03:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:42.772 { 00:07:42.772 "subsystems": [ 00:07:42.772 { 00:07:42.772 "subsystem": "bdev", 00:07:42.772 "config": [ 00:07:42.772 { 00:07:42.772 "params": { 00:07:42.772 "trtype": "pcie", 00:07:42.772 "traddr": "0000:00:10.0", 00:07:42.772 "name": "Nvme0" 00:07:42.772 }, 00:07:42.772 "method": "bdev_nvme_attach_controller" 00:07:42.772 }, 00:07:42.772 { 00:07:42.772 "params": { 00:07:42.772 "trtype": "pcie", 00:07:42.772 "traddr": "0000:00:11.0", 00:07:42.772 "name": "Nvme1" 00:07:42.772 }, 00:07:42.772 "method": "bdev_nvme_attach_controller" 00:07:42.772 }, 00:07:42.772 { 00:07:42.772 "method": "bdev_wait_for_examine" 00:07:42.772 } 00:07:42.772 ] 00:07:42.772 } 00:07:42.772 ] 00:07:42.772 } 00:07:42.772 [2024-06-10 08:03:04.627070] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
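The dd_inflate_file step traced above only exercises plain file I/O, so it can be repeated outside the test harness with a single spdk_dd call. The sketch below is an assumed, minimal reproduction: the build path and the scratch file are placeholders, not values taken from this run.

# Hedged sketch: inflate a 64 MiB dump file with spdk_dd, mirroring the
# dd_inflate_file invocation traced above. Paths are assumptions.
SPDK_DD=./build/bin/spdk_dd        # assumed location of a local SPDK build
OUT=/tmp/dd.dump0                  # assumed scratch file
"$SPDK_DD" --if=/dev/zero --of="$OUT" --oflag=append --bs=1048576 --count=64
stat -c %s "$OUT"                  # a fresh file ends up at 67108864 bytes (64 MiB)
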
00:07:42.772 [2024-06-10 08:03:04.627178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63774 ] 00:07:43.031 [2024-06-10 08:03:04.767120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.031 [2024-06-10 08:03:04.874121] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.290 [2024-06-10 08:03:04.933423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.664  Copying: 58/64 [MB] (58 MBps) Copying: 64/64 [MB] (average 58 MBps) 00:07:44.664 00:07:44.664 00:07:44.664 real 0m1.932s 00:07:44.664 user 0m1.685s 00:07:44.664 sys 0m1.483s 00:07:44.665 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:44.665 ************************************ 00:07:44.665 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:44.665 END TEST dd_copy_to_out_bdev 00:07:44.665 ************************************ 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:44.924 ************************************ 00:07:44.924 START TEST dd_offset_magic 00:07:44.924 ************************************ 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # offset_magic 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:44.924 08:03:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:44.924 [2024-06-10 08:03:06.610071] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:44.924 [2024-06-10 08:03:06.610198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63819 ] 00:07:44.924 { 00:07:44.924 "subsystems": [ 00:07:44.924 { 00:07:44.924 "subsystem": "bdev", 00:07:44.924 "config": [ 00:07:44.924 { 00:07:44.924 "params": { 00:07:44.924 "trtype": "pcie", 00:07:44.924 "traddr": "0000:00:10.0", 00:07:44.924 "name": "Nvme0" 00:07:44.924 }, 00:07:44.924 "method": "bdev_nvme_attach_controller" 00:07:44.924 }, 00:07:44.924 { 00:07:44.924 "params": { 00:07:44.924 "trtype": "pcie", 00:07:44.924 "traddr": "0000:00:11.0", 00:07:44.924 "name": "Nvme1" 00:07:44.924 }, 00:07:44.924 "method": "bdev_nvme_attach_controller" 00:07:44.924 }, 00:07:44.924 { 00:07:44.924 "method": "bdev_wait_for_examine" 00:07:44.924 } 00:07:44.924 ] 00:07:44.924 } 00:07:44.924 ] 00:07:44.924 } 00:07:44.924 [2024-06-10 08:03:06.750538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.183 [2024-06-10 08:03:06.862526] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.183 [2024-06-10 08:03:06.919303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.730  Copying: 65/65 [MB] (average 984 MBps) 00:07:45.730 00:07:45.730 08:03:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:45.730 08:03:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:45.730 08:03:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:45.730 08:03:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:45.730 [2024-06-10 08:03:07.497391] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:45.730 [2024-06-10 08:03:07.497519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63839 ] 00:07:45.730 { 00:07:45.730 "subsystems": [ 00:07:45.730 { 00:07:45.730 "subsystem": "bdev", 00:07:45.730 "config": [ 00:07:45.730 { 00:07:45.730 "params": { 00:07:45.730 "trtype": "pcie", 00:07:45.730 "traddr": "0000:00:10.0", 00:07:45.730 "name": "Nvme0" 00:07:45.730 }, 00:07:45.730 "method": "bdev_nvme_attach_controller" 00:07:45.730 }, 00:07:45.730 { 00:07:45.730 "params": { 00:07:45.730 "trtype": "pcie", 00:07:45.730 "traddr": "0000:00:11.0", 00:07:45.730 "name": "Nvme1" 00:07:45.730 }, 00:07:45.730 "method": "bdev_nvme_attach_controller" 00:07:45.730 }, 00:07:45.730 { 00:07:45.730 "method": "bdev_wait_for_examine" 00:07:45.730 } 00:07:45.730 ] 00:07:45.730 } 00:07:45.730 ] 00:07:45.730 } 00:07:45.990 [2024-06-10 08:03:07.637889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.990 [2024-06-10 08:03:07.743599] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.990 [2024-06-10 08:03:07.801284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.508  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:46.508 00:07:46.508 08:03:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:46.508 08:03:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:46.508 08:03:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:46.508 08:03:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:46.508 08:03:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:46.508 08:03:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:46.508 08:03:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:46.508 [2024-06-10 08:03:08.248913] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:46.508 [2024-06-10 08:03:08.249040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63855 ] 00:07:46.508 { 00:07:46.508 "subsystems": [ 00:07:46.508 { 00:07:46.508 "subsystem": "bdev", 00:07:46.508 "config": [ 00:07:46.508 { 00:07:46.508 "params": { 00:07:46.508 "trtype": "pcie", 00:07:46.508 "traddr": "0000:00:10.0", 00:07:46.508 "name": "Nvme0" 00:07:46.508 }, 00:07:46.508 "method": "bdev_nvme_attach_controller" 00:07:46.508 }, 00:07:46.508 { 00:07:46.508 "params": { 00:07:46.508 "trtype": "pcie", 00:07:46.508 "traddr": "0000:00:11.0", 00:07:46.508 "name": "Nvme1" 00:07:46.508 }, 00:07:46.508 "method": "bdev_nvme_attach_controller" 00:07:46.508 }, 00:07:46.508 { 00:07:46.508 "method": "bdev_wait_for_examine" 00:07:46.508 } 00:07:46.508 ] 00:07:46.508 } 00:07:46.508 ] 00:07:46.508 } 00:07:46.767 [2024-06-10 08:03:08.388340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.767 [2024-06-10 08:03:08.500258] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.767 [2024-06-10 08:03:08.556834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:47.285  Copying: 65/65 [MB] (average 1065 MBps) 00:07:47.285 00:07:47.285 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:47.285 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:47.285 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:47.285 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:47.285 [2024-06-10 08:03:09.114228] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:47.285 [2024-06-10 08:03:09.114987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63870 ] 00:07:47.285 { 00:07:47.285 "subsystems": [ 00:07:47.285 { 00:07:47.285 "subsystem": "bdev", 00:07:47.285 "config": [ 00:07:47.285 { 00:07:47.285 "params": { 00:07:47.285 "trtype": "pcie", 00:07:47.285 "traddr": "0000:00:10.0", 00:07:47.285 "name": "Nvme0" 00:07:47.285 }, 00:07:47.285 "method": "bdev_nvme_attach_controller" 00:07:47.285 }, 00:07:47.285 { 00:07:47.285 "params": { 00:07:47.285 "trtype": "pcie", 00:07:47.285 "traddr": "0000:00:11.0", 00:07:47.285 "name": "Nvme1" 00:07:47.285 }, 00:07:47.285 "method": "bdev_nvme_attach_controller" 00:07:47.285 }, 00:07:47.285 { 00:07:47.285 "method": "bdev_wait_for_examine" 00:07:47.285 } 00:07:47.285 ] 00:07:47.285 } 00:07:47.285 ] 00:07:47.285 } 00:07:47.545 [2024-06-10 08:03:09.261666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.545 [2024-06-10 08:03:09.383831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.804 [2024-06-10 08:03:09.443161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.062  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:48.063 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:48.063 00:07:48.063 real 0m3.283s 00:07:48.063 user 0m2.385s 00:07:48.063 sys 0m0.979s 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:48.063 ************************************ 00:07:48.063 END TEST dd_offset_magic 00:07:48.063 ************************************ 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:48.063 08:03:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:48.321 [2024-06-10 08:03:09.935535] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
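The dd_offset_magic passes traced above boil down to: copy the first 65 MiB of Nvme0n1 (whose first bytes hold the magic string written earlier) into Nvme1n1 at a megabyte offset, read one 1 MiB block back at the same offset, and compare the first 26 bytes. A hedged stand-alone sketch of a single round trip follows; it assumes the copy-to-Nvme0n1 step above already ran, and the PCI addresses and paths mirror this run's config but may differ on another machine.

# Sketch of one offset round trip (assumptions: ./build/bin/spdk_dd exists,
# controllers sit at 0000:00:10.0 / 0000:00:11.0 as in the traced JSON config).
cat > /tmp/dd_conf.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
    "method": "bdev_nvme_attach_controller" },
  { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
    "method": "bdev_nvme_attach_controller" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
./build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /tmp/dd_conf.json
./build/bin/spdk_dd --ib=Nvme1n1 --of=/tmp/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /tmp/dd_conf.json
read -rn26 magic_check < /tmp/dd.dump1
[[ $magic_check == 'This Is Our Magic, find it' ]] && echo 'offset 16: magic intact'
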
00:07:48.321 [2024-06-10 08:03:09.935644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63908 ] 00:07:48.321 { 00:07:48.321 "subsystems": [ 00:07:48.321 { 00:07:48.321 "subsystem": "bdev", 00:07:48.321 "config": [ 00:07:48.321 { 00:07:48.321 "params": { 00:07:48.321 "trtype": "pcie", 00:07:48.321 "traddr": "0000:00:10.0", 00:07:48.321 "name": "Nvme0" 00:07:48.321 }, 00:07:48.321 "method": "bdev_nvme_attach_controller" 00:07:48.321 }, 00:07:48.322 { 00:07:48.322 "params": { 00:07:48.322 "trtype": "pcie", 00:07:48.322 "traddr": "0000:00:11.0", 00:07:48.322 "name": "Nvme1" 00:07:48.322 }, 00:07:48.322 "method": "bdev_nvme_attach_controller" 00:07:48.322 }, 00:07:48.322 { 00:07:48.322 "method": "bdev_wait_for_examine" 00:07:48.322 } 00:07:48.322 ] 00:07:48.322 } 00:07:48.322 ] 00:07:48.322 } 00:07:48.322 [2024-06-10 08:03:10.073970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.322 [2024-06-10 08:03:10.180199] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.581 [2024-06-10 08:03:10.240274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.845  Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:48.845 00:07:48.845 08:03:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:48.845 08:03:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:48.845 08:03:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.845 08:03:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:48.845 08:03:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:48.845 08:03:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:48.845 08:03:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:48.845 08:03:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:48.845 08:03:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:48.845 08:03:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:48.845 [2024-06-10 08:03:10.684843] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:48.845 [2024-06-10 08:03:10.685007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63929 ] 00:07:48.845 { 00:07:48.845 "subsystems": [ 00:07:48.845 { 00:07:48.845 "subsystem": "bdev", 00:07:48.845 "config": [ 00:07:48.845 { 00:07:48.846 "params": { 00:07:48.846 "trtype": "pcie", 00:07:48.846 "traddr": "0000:00:10.0", 00:07:48.846 "name": "Nvme0" 00:07:48.846 }, 00:07:48.846 "method": "bdev_nvme_attach_controller" 00:07:48.846 }, 00:07:48.846 { 00:07:48.846 "params": { 00:07:48.846 "trtype": "pcie", 00:07:48.846 "traddr": "0000:00:11.0", 00:07:48.846 "name": "Nvme1" 00:07:48.846 }, 00:07:48.846 "method": "bdev_nvme_attach_controller" 00:07:48.846 }, 00:07:48.846 { 00:07:48.846 "method": "bdev_wait_for_examine" 00:07:48.846 } 00:07:48.846 ] 00:07:48.846 } 00:07:48.846 ] 00:07:48.846 } 00:07:49.104 [2024-06-10 08:03:10.826169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.104 [2024-06-10 08:03:10.944454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.363 [2024-06-10 08:03:11.003862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.640  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:49.640 00:07:49.640 08:03:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:49.640 00:07:49.640 real 0m7.672s 00:07:49.640 user 0m5.629s 00:07:49.640 sys 0m3.526s 00:07:49.640 08:03:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:49.640 ************************************ 00:07:49.640 END TEST spdk_dd_bdev_to_bdev 00:07:49.640 ************************************ 00:07:49.640 08:03:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:49.640 08:03:11 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:49.640 08:03:11 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:49.640 08:03:11 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:49.640 08:03:11 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:49.640 08:03:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:49.640 ************************************ 00:07:49.640 START TEST spdk_dd_uring 00:07:49.640 ************************************ 00:07:49.640 08:03:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:49.919 * Looking for test storage... 
00:07:49.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:49.919 ************************************ 00:07:49.919 START TEST dd_uring_copy 00:07:49.919 ************************************ 00:07:49.919 
08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # uring_zram_copy 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=46z1f2fqr70n4ed42fqp5l6t3wizkbv1avqyjzjnwsdo7cbns6x7nricti4ptg6j3nedh2t3kstse1az6yuc58xkxkiiwme7q9jvwxv56rbeefsfuzwwo9o3j0ptrfpt6dg0ud0ay4lb46b864lbzqv6zmbq4i0r8iu50i1i6ksrrksmuagrrwoua3ooypl3kka4c12mmupxc67llm6hug19f0ztjxlsrpsmoq2ql8sin9l4lzy1ofvi7hnap4pnd9k0wce0gnuhnitqvaop2nscydug1r9o3g2nia5zf0uk6swcxf3aub7uizm531k9qfva71xpzg7szv29tugmnxa8pajwoe6j4xp9g3evsq3xnqfyjf5bezamoaweu8vyk28j5a0hrzpruwliwlzkp70jdxa8t5ptih4zrc4hutxtk424sgsvhowzqb3xyk9p82dbg9r8nywso10tz2om8a85uocpogqt6pj76fhxt2bjocxed2antbnhfdtmveycovejk322dp4fj4fz3258pqp91dye7on3nxxk23jffbf7rl6cw2amf5ok0gvfjq0jy59qyx36jw7b8c3ypjbwvwrq1ivtp916vfrq4q7rd75wmdsky7l4hnrhnbfujky64dvd1q7pdp4edcm4h1jyptdrkoofitq5b7qzoqsxqbblzv6kfh9yq7636enj30g8rxjj3nqnoncze6x4uknwippx92nxy8d7wtqei14yiv9k057hbnkv6v9q5qza08uavd87o5nduo7llleb4brdln2fsq8o15c69hocrtbpjmgi7btzo6793frcdmoyqzjm65nwuimpaetjbjut1vpp5fpv7c0oz4zrmobmb8cm8ahr9c0in6a7nz2ukuvapvgn35pl4urrefwklmisea6k1arfsqyp5mhaz8l4y44nml7mqutxu4wthby45z2s14tbi20m2d4zvutda0rrspuilr668mauu5tu3ycptwcu8qnxzlt9tm9s8k5fgbw5xfh8 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 46z1f2fqr70n4ed42fqp5l6t3wizkbv1avqyjzjnwsdo7cbns6x7nricti4ptg6j3nedh2t3kstse1az6yuc58xkxkiiwme7q9jvwxv56rbeefsfuzwwo9o3j0ptrfpt6dg0ud0ay4lb46b864lbzqv6zmbq4i0r8iu50i1i6ksrrksmuagrrwoua3ooypl3kka4c12mmupxc67llm6hug19f0ztjxlsrpsmoq2ql8sin9l4lzy1ofvi7hnap4pnd9k0wce0gnuhnitqvaop2nscydug1r9o3g2nia5zf0uk6swcxf3aub7uizm531k9qfva71xpzg7szv29tugmnxa8pajwoe6j4xp9g3evsq3xnqfyjf5bezamoaweu8vyk28j5a0hrzpruwliwlzkp70jdxa8t5ptih4zrc4hutxtk424sgsvhowzqb3xyk9p82dbg9r8nywso10tz2om8a85uocpogqt6pj76fhxt2bjocxed2antbnhfdtmveycovejk322dp4fj4fz3258pqp91dye7on3nxxk23jffbf7rl6cw2amf5ok0gvfjq0jy59qyx36jw7b8c3ypjbwvwrq1ivtp916vfrq4q7rd75wmdsky7l4hnrhnbfujky64dvd1q7pdp4edcm4h1jyptdrkoofitq5b7qzoqsxqbblzv6kfh9yq7636enj30g8rxjj3nqnoncze6x4uknwippx92nxy8d7wtqei14yiv9k057hbnkv6v9q5qza08uavd87o5nduo7llleb4brdln2fsq8o15c69hocrtbpjmgi7btzo6793frcdmoyqzjm65nwuimpaetjbjut1vpp5fpv7c0oz4zrmobmb8cm8ahr9c0in6a7nz2ukuvapvgn35pl4urrefwklmisea6k1arfsqyp5mhaz8l4y44nml7mqutxu4wthby45z2s14tbi20m2d4zvutda0rrspuilr668mauu5tu3ycptwcu8qnxzlt9tm9s8k5fgbw5xfh8 00:07:49.919 08:03:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:49.919 [2024-06-10 08:03:11.676621] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:49.919 [2024-06-10 08:03:11.676719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63999 ] 00:07:50.179 [2024-06-10 08:03:11.815870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.179 [2024-06-10 08:03:11.928209] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.179 [2024-06-10 08:03:11.986885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.315  Copying: 511/511 [MB] (average 1286 MBps) 00:07:51.315 00:07:51.315 08:03:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:51.315 08:03:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:51.315 08:03:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:51.315 08:03:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:51.315 [2024-06-10 08:03:13.073970] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:07:51.315 [2024-06-10 08:03:13.074059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64015 ] 00:07:51.315 { 00:07:51.315 "subsystems": [ 00:07:51.315 { 00:07:51.315 "subsystem": "bdev", 00:07:51.315 "config": [ 00:07:51.315 { 00:07:51.315 "params": { 00:07:51.315 "block_size": 512, 00:07:51.315 "num_blocks": 1048576, 00:07:51.315 "name": "malloc0" 00:07:51.315 }, 00:07:51.315 "method": "bdev_malloc_create" 00:07:51.315 }, 00:07:51.315 { 00:07:51.315 "params": { 00:07:51.315 "filename": "/dev/zram1", 00:07:51.315 "name": "uring0" 00:07:51.315 }, 00:07:51.315 "method": "bdev_uring_create" 00:07:51.315 }, 00:07:51.315 { 00:07:51.315 "method": "bdev_wait_for_examine" 00:07:51.315 } 00:07:51.315 ] 00:07:51.315 } 00:07:51.315 ] 00:07:51.315 } 00:07:51.575 [2024-06-10 08:03:13.204944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.575 [2024-06-10 08:03:13.310310] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.575 [2024-06-10 08:03:13.366624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.401  Copying: 228/512 [MB] (228 MBps) Copying: 456/512 [MB] (227 MBps) Copying: 512/512 [MB] (average 227 MBps) 00:07:54.401 00:07:54.401 08:03:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:54.401 08:03:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:54.401 08:03:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:54.401 08:03:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:54.658 [2024-06-10 08:03:16.287499] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
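The dd_uring_copy setup traced above condenses to a few commands: hot-add a zram device, size it to 512M, inflate a magic dump file, and hand spdk_dd a malloc0 + uring0 bdev config for the copy. The sketch below is an assumed reproduction; the zram index, the /tmp paths, and the random stand-in for the generated 1024-byte magic are illustrative, not values from this run.

# Hedged sketch of the zram/uring setup (assumptions: zram support available,
# ./build/bin/spdk_dd exists; /tmp paths are placeholders).
[[ -e /sys/class/zram-control ]] || modprobe zram
id=$(cat /sys/class/zram-control/hot_add)        # returns the new device index
echo 512M > "/sys/block/zram${id}/disksize"
head -c 1024 /dev/urandom > /tmp/magic.dump0     # stand-in for the generated magic
./build/bin/spdk_dd --if=/dev/zero --of=/tmp/magic.dump0 --oflag=append --bs=536869887 --count=1
cat > /tmp/uring_conf.json <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
    "method": "bdev_malloc_create" },
  { "params": { "filename": "/dev/zram${id}", "name": "uring0" },
    "method": "bdev_uring_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
./build/bin/spdk_dd --if=/tmp/magic.dump0 --ob=uring0 --json /tmp/uring_conf.json
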
00:07:54.658 [2024-06-10 08:03:16.287597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64059 ] 00:07:54.659 { 00:07:54.659 "subsystems": [ 00:07:54.659 { 00:07:54.659 "subsystem": "bdev", 00:07:54.659 "config": [ 00:07:54.659 { 00:07:54.659 "params": { 00:07:54.659 "block_size": 512, 00:07:54.659 "num_blocks": 1048576, 00:07:54.659 "name": "malloc0" 00:07:54.659 }, 00:07:54.659 "method": "bdev_malloc_create" 00:07:54.659 }, 00:07:54.659 { 00:07:54.659 "params": { 00:07:54.659 "filename": "/dev/zram1", 00:07:54.659 "name": "uring0" 00:07:54.659 }, 00:07:54.659 "method": "bdev_uring_create" 00:07:54.659 }, 00:07:54.659 { 00:07:54.659 "method": "bdev_wait_for_examine" 00:07:54.659 } 00:07:54.659 ] 00:07:54.659 } 00:07:54.659 ] 00:07:54.659 } 00:07:54.659 [2024-06-10 08:03:16.419543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.916 [2024-06-10 08:03:16.537906] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.916 [2024-06-10 08:03:16.594480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.408  Copying: 188/512 [MB] (188 MBps) Copying: 363/512 [MB] (174 MBps) Copying: 512/512 [MB] (average 178 MBps) 00:07:58.408 00:07:58.408 08:03:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:58.408 08:03:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 46z1f2fqr70n4ed42fqp5l6t3wizkbv1avqyjzjnwsdo7cbns6x7nricti4ptg6j3nedh2t3kstse1az6yuc58xkxkiiwme7q9jvwxv56rbeefsfuzwwo9o3j0ptrfpt6dg0ud0ay4lb46b864lbzqv6zmbq4i0r8iu50i1i6ksrrksmuagrrwoua3ooypl3kka4c12mmupxc67llm6hug19f0ztjxlsrpsmoq2ql8sin9l4lzy1ofvi7hnap4pnd9k0wce0gnuhnitqvaop2nscydug1r9o3g2nia5zf0uk6swcxf3aub7uizm531k9qfva71xpzg7szv29tugmnxa8pajwoe6j4xp9g3evsq3xnqfyjf5bezamoaweu8vyk28j5a0hrzpruwliwlzkp70jdxa8t5ptih4zrc4hutxtk424sgsvhowzqb3xyk9p82dbg9r8nywso10tz2om8a85uocpogqt6pj76fhxt2bjocxed2antbnhfdtmveycovejk322dp4fj4fz3258pqp91dye7on3nxxk23jffbf7rl6cw2amf5ok0gvfjq0jy59qyx36jw7b8c3ypjbwvwrq1ivtp916vfrq4q7rd75wmdsky7l4hnrhnbfujky64dvd1q7pdp4edcm4h1jyptdrkoofitq5b7qzoqsxqbblzv6kfh9yq7636enj30g8rxjj3nqnoncze6x4uknwippx92nxy8d7wtqei14yiv9k057hbnkv6v9q5qza08uavd87o5nduo7llleb4brdln2fsq8o15c69hocrtbpjmgi7btzo6793frcdmoyqzjm65nwuimpaetjbjut1vpp5fpv7c0oz4zrmobmb8cm8ahr9c0in6a7nz2ukuvapvgn35pl4urrefwklmisea6k1arfsqyp5mhaz8l4y44nml7mqutxu4wthby45z2s14tbi20m2d4zvutda0rrspuilr668mauu5tu3ycptwcu8qnxzlt9tm9s8k5fgbw5xfh8 == 
\4\6\z\1\f\2\f\q\r\7\0\n\4\e\d\4\2\f\q\p\5\l\6\t\3\w\i\z\k\b\v\1\a\v\q\y\j\z\j\n\w\s\d\o\7\c\b\n\s\6\x\7\n\r\i\c\t\i\4\p\t\g\6\j\3\n\e\d\h\2\t\3\k\s\t\s\e\1\a\z\6\y\u\c\5\8\x\k\x\k\i\i\w\m\e\7\q\9\j\v\w\x\v\5\6\r\b\e\e\f\s\f\u\z\w\w\o\9\o\3\j\0\p\t\r\f\p\t\6\d\g\0\u\d\0\a\y\4\l\b\4\6\b\8\6\4\l\b\z\q\v\6\z\m\b\q\4\i\0\r\8\i\u\5\0\i\1\i\6\k\s\r\r\k\s\m\u\a\g\r\r\w\o\u\a\3\o\o\y\p\l\3\k\k\a\4\c\1\2\m\m\u\p\x\c\6\7\l\l\m\6\h\u\g\1\9\f\0\z\t\j\x\l\s\r\p\s\m\o\q\2\q\l\8\s\i\n\9\l\4\l\z\y\1\o\f\v\i\7\h\n\a\p\4\p\n\d\9\k\0\w\c\e\0\g\n\u\h\n\i\t\q\v\a\o\p\2\n\s\c\y\d\u\g\1\r\9\o\3\g\2\n\i\a\5\z\f\0\u\k\6\s\w\c\x\f\3\a\u\b\7\u\i\z\m\5\3\1\k\9\q\f\v\a\7\1\x\p\z\g\7\s\z\v\2\9\t\u\g\m\n\x\a\8\p\a\j\w\o\e\6\j\4\x\p\9\g\3\e\v\s\q\3\x\n\q\f\y\j\f\5\b\e\z\a\m\o\a\w\e\u\8\v\y\k\2\8\j\5\a\0\h\r\z\p\r\u\w\l\i\w\l\z\k\p\7\0\j\d\x\a\8\t\5\p\t\i\h\4\z\r\c\4\h\u\t\x\t\k\4\2\4\s\g\s\v\h\o\w\z\q\b\3\x\y\k\9\p\8\2\d\b\g\9\r\8\n\y\w\s\o\1\0\t\z\2\o\m\8\a\8\5\u\o\c\p\o\g\q\t\6\p\j\7\6\f\h\x\t\2\b\j\o\c\x\e\d\2\a\n\t\b\n\h\f\d\t\m\v\e\y\c\o\v\e\j\k\3\2\2\d\p\4\f\j\4\f\z\3\2\5\8\p\q\p\9\1\d\y\e\7\o\n\3\n\x\x\k\2\3\j\f\f\b\f\7\r\l\6\c\w\2\a\m\f\5\o\k\0\g\v\f\j\q\0\j\y\5\9\q\y\x\3\6\j\w\7\b\8\c\3\y\p\j\b\w\v\w\r\q\1\i\v\t\p\9\1\6\v\f\r\q\4\q\7\r\d\7\5\w\m\d\s\k\y\7\l\4\h\n\r\h\n\b\f\u\j\k\y\6\4\d\v\d\1\q\7\p\d\p\4\e\d\c\m\4\h\1\j\y\p\t\d\r\k\o\o\f\i\t\q\5\b\7\q\z\o\q\s\x\q\b\b\l\z\v\6\k\f\h\9\y\q\7\6\3\6\e\n\j\3\0\g\8\r\x\j\j\3\n\q\n\o\n\c\z\e\6\x\4\u\k\n\w\i\p\p\x\9\2\n\x\y\8\d\7\w\t\q\e\i\1\4\y\i\v\9\k\0\5\7\h\b\n\k\v\6\v\9\q\5\q\z\a\0\8\u\a\v\d\8\7\o\5\n\d\u\o\7\l\l\l\e\b\4\b\r\d\l\n\2\f\s\q\8\o\1\5\c\6\9\h\o\c\r\t\b\p\j\m\g\i\7\b\t\z\o\6\7\9\3\f\r\c\d\m\o\y\q\z\j\m\6\5\n\w\u\i\m\p\a\e\t\j\b\j\u\t\1\v\p\p\5\f\p\v\7\c\0\o\z\4\z\r\m\o\b\m\b\8\c\m\8\a\h\r\9\c\0\i\n\6\a\7\n\z\2\u\k\u\v\a\p\v\g\n\3\5\p\l\4\u\r\r\e\f\w\k\l\m\i\s\e\a\6\k\1\a\r\f\s\q\y\p\5\m\h\a\z\8\l\4\y\4\4\n\m\l\7\m\q\u\t\x\u\4\w\t\h\b\y\4\5\z\2\s\1\4\t\b\i\2\0\m\2\d\4\z\v\u\t\d\a\0\r\r\s\p\u\i\l\r\6\6\8\m\a\u\u\5\t\u\3\y\c\p\t\w\c\u\8\q\n\x\z\l\t\9\t\m\9\s\8\k\5\f\g\b\w\5\x\f\h\8 ]] 00:07:58.408 08:03:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:58.408 08:03:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 46z1f2fqr70n4ed42fqp5l6t3wizkbv1avqyjzjnwsdo7cbns6x7nricti4ptg6j3nedh2t3kstse1az6yuc58xkxkiiwme7q9jvwxv56rbeefsfuzwwo9o3j0ptrfpt6dg0ud0ay4lb46b864lbzqv6zmbq4i0r8iu50i1i6ksrrksmuagrrwoua3ooypl3kka4c12mmupxc67llm6hug19f0ztjxlsrpsmoq2ql8sin9l4lzy1ofvi7hnap4pnd9k0wce0gnuhnitqvaop2nscydug1r9o3g2nia5zf0uk6swcxf3aub7uizm531k9qfva71xpzg7szv29tugmnxa8pajwoe6j4xp9g3evsq3xnqfyjf5bezamoaweu8vyk28j5a0hrzpruwliwlzkp70jdxa8t5ptih4zrc4hutxtk424sgsvhowzqb3xyk9p82dbg9r8nywso10tz2om8a85uocpogqt6pj76fhxt2bjocxed2antbnhfdtmveycovejk322dp4fj4fz3258pqp91dye7on3nxxk23jffbf7rl6cw2amf5ok0gvfjq0jy59qyx36jw7b8c3ypjbwvwrq1ivtp916vfrq4q7rd75wmdsky7l4hnrhnbfujky64dvd1q7pdp4edcm4h1jyptdrkoofitq5b7qzoqsxqbblzv6kfh9yq7636enj30g8rxjj3nqnoncze6x4uknwippx92nxy8d7wtqei14yiv9k057hbnkv6v9q5qza08uavd87o5nduo7llleb4brdln2fsq8o15c69hocrtbpjmgi7btzo6793frcdmoyqzjm65nwuimpaetjbjut1vpp5fpv7c0oz4zrmobmb8cm8ahr9c0in6a7nz2ukuvapvgn35pl4urrefwklmisea6k1arfsqyp5mhaz8l4y44nml7mqutxu4wthby45z2s14tbi20m2d4zvutda0rrspuilr668mauu5tu3ycptwcu8qnxzlt9tm9s8k5fgbw5xfh8 == 
\4\6\z\1\f\2\f\q\r\7\0\n\4\e\d\4\2\f\q\p\5\l\6\t\3\w\i\z\k\b\v\1\a\v\q\y\j\z\j\n\w\s\d\o\7\c\b\n\s\6\x\7\n\r\i\c\t\i\4\p\t\g\6\j\3\n\e\d\h\2\t\3\k\s\t\s\e\1\a\z\6\y\u\c\5\8\x\k\x\k\i\i\w\m\e\7\q\9\j\v\w\x\v\5\6\r\b\e\e\f\s\f\u\z\w\w\o\9\o\3\j\0\p\t\r\f\p\t\6\d\g\0\u\d\0\a\y\4\l\b\4\6\b\8\6\4\l\b\z\q\v\6\z\m\b\q\4\i\0\r\8\i\u\5\0\i\1\i\6\k\s\r\r\k\s\m\u\a\g\r\r\w\o\u\a\3\o\o\y\p\l\3\k\k\a\4\c\1\2\m\m\u\p\x\c\6\7\l\l\m\6\h\u\g\1\9\f\0\z\t\j\x\l\s\r\p\s\m\o\q\2\q\l\8\s\i\n\9\l\4\l\z\y\1\o\f\v\i\7\h\n\a\p\4\p\n\d\9\k\0\w\c\e\0\g\n\u\h\n\i\t\q\v\a\o\p\2\n\s\c\y\d\u\g\1\r\9\o\3\g\2\n\i\a\5\z\f\0\u\k\6\s\w\c\x\f\3\a\u\b\7\u\i\z\m\5\3\1\k\9\q\f\v\a\7\1\x\p\z\g\7\s\z\v\2\9\t\u\g\m\n\x\a\8\p\a\j\w\o\e\6\j\4\x\p\9\g\3\e\v\s\q\3\x\n\q\f\y\j\f\5\b\e\z\a\m\o\a\w\e\u\8\v\y\k\2\8\j\5\a\0\h\r\z\p\r\u\w\l\i\w\l\z\k\p\7\0\j\d\x\a\8\t\5\p\t\i\h\4\z\r\c\4\h\u\t\x\t\k\4\2\4\s\g\s\v\h\o\w\z\q\b\3\x\y\k\9\p\8\2\d\b\g\9\r\8\n\y\w\s\o\1\0\t\z\2\o\m\8\a\8\5\u\o\c\p\o\g\q\t\6\p\j\7\6\f\h\x\t\2\b\j\o\c\x\e\d\2\a\n\t\b\n\h\f\d\t\m\v\e\y\c\o\v\e\j\k\3\2\2\d\p\4\f\j\4\f\z\3\2\5\8\p\q\p\9\1\d\y\e\7\o\n\3\n\x\x\k\2\3\j\f\f\b\f\7\r\l\6\c\w\2\a\m\f\5\o\k\0\g\v\f\j\q\0\j\y\5\9\q\y\x\3\6\j\w\7\b\8\c\3\y\p\j\b\w\v\w\r\q\1\i\v\t\p\9\1\6\v\f\r\q\4\q\7\r\d\7\5\w\m\d\s\k\y\7\l\4\h\n\r\h\n\b\f\u\j\k\y\6\4\d\v\d\1\q\7\p\d\p\4\e\d\c\m\4\h\1\j\y\p\t\d\r\k\o\o\f\i\t\q\5\b\7\q\z\o\q\s\x\q\b\b\l\z\v\6\k\f\h\9\y\q\7\6\3\6\e\n\j\3\0\g\8\r\x\j\j\3\n\q\n\o\n\c\z\e\6\x\4\u\k\n\w\i\p\p\x\9\2\n\x\y\8\d\7\w\t\q\e\i\1\4\y\i\v\9\k\0\5\7\h\b\n\k\v\6\v\9\q\5\q\z\a\0\8\u\a\v\d\8\7\o\5\n\d\u\o\7\l\l\l\e\b\4\b\r\d\l\n\2\f\s\q\8\o\1\5\c\6\9\h\o\c\r\t\b\p\j\m\g\i\7\b\t\z\o\6\7\9\3\f\r\c\d\m\o\y\q\z\j\m\6\5\n\w\u\i\m\p\a\e\t\j\b\j\u\t\1\v\p\p\5\f\p\v\7\c\0\o\z\4\z\r\m\o\b\m\b\8\c\m\8\a\h\r\9\c\0\i\n\6\a\7\n\z\2\u\k\u\v\a\p\v\g\n\3\5\p\l\4\u\r\r\e\f\w\k\l\m\i\s\e\a\6\k\1\a\r\f\s\q\y\p\5\m\h\a\z\8\l\4\y\4\4\n\m\l\7\m\q\u\t\x\u\4\w\t\h\b\y\4\5\z\2\s\1\4\t\b\i\2\0\m\2\d\4\z\v\u\t\d\a\0\r\r\s\p\u\i\l\r\6\6\8\m\a\u\u\5\t\u\3\y\c\p\t\w\c\u\8\q\n\x\z\l\t\9\t\m\9\s\8\k\5\f\g\b\w\5\x\f\h\8 ]] 00:07:58.408 08:03:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:58.666 08:03:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:58.666 08:03:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:58.666 08:03:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:58.666 08:03:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:58.924 [2024-06-10 08:03:20.556487] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:07:58.924 [2024-06-10 08:03:20.556635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64128 ] 00:07:58.924 { 00:07:58.925 "subsystems": [ 00:07:58.925 { 00:07:58.925 "subsystem": "bdev", 00:07:58.925 "config": [ 00:07:58.925 { 00:07:58.925 "params": { 00:07:58.925 "block_size": 512, 00:07:58.925 "num_blocks": 1048576, 00:07:58.925 "name": "malloc0" 00:07:58.925 }, 00:07:58.925 "method": "bdev_malloc_create" 00:07:58.925 }, 00:07:58.925 { 00:07:58.925 "params": { 00:07:58.925 "filename": "/dev/zram1", 00:07:58.925 "name": "uring0" 00:07:58.925 }, 00:07:58.925 "method": "bdev_uring_create" 00:07:58.925 }, 00:07:58.925 { 00:07:58.925 "method": "bdev_wait_for_examine" 00:07:58.925 } 00:07:58.925 ] 00:07:58.925 } 00:07:58.925 ] 00:07:58.925 } 00:07:58.925 [2024-06-10 08:03:20.697662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.184 [2024-06-10 08:03:20.818400] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.184 [2024-06-10 08:03:20.878333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:03.272  Copying: 151/512 [MB] (151 MBps) Copying: 303/512 [MB] (151 MBps) Copying: 453/512 [MB] (149 MBps) Copying: 512/512 [MB] (average 151 MBps) 00:08:03.272 00:08:03.272 08:03:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:03.272 08:03:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:03.272 08:03:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:03.272 08:03:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:03.272 08:03:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:03.272 08:03:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:03.272 08:03:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:03.272 08:03:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:03.272 [2024-06-10 08:03:24.970410] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:08:03.272 [2024-06-10 08:03:24.970516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64191 ] 00:08:03.272 { 00:08:03.272 "subsystems": [ 00:08:03.272 { 00:08:03.272 "subsystem": "bdev", 00:08:03.272 "config": [ 00:08:03.272 { 00:08:03.272 "params": { 00:08:03.272 "block_size": 512, 00:08:03.272 "num_blocks": 1048576, 00:08:03.272 "name": "malloc0" 00:08:03.272 }, 00:08:03.272 "method": "bdev_malloc_create" 00:08:03.272 }, 00:08:03.272 { 00:08:03.272 "params": { 00:08:03.272 "filename": "/dev/zram1", 00:08:03.272 "name": "uring0" 00:08:03.272 }, 00:08:03.272 "method": "bdev_uring_create" 00:08:03.272 }, 00:08:03.272 { 00:08:03.272 "params": { 00:08:03.272 "name": "uring0" 00:08:03.272 }, 00:08:03.272 "method": "bdev_uring_delete" 00:08:03.272 }, 00:08:03.272 { 00:08:03.272 "method": "bdev_wait_for_examine" 00:08:03.272 } 00:08:03.272 ] 00:08:03.272 } 00:08:03.272 ] 00:08:03.272 } 00:08:03.531 [2024-06-10 08:03:25.187399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.531 [2024-06-10 08:03:25.308051] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.531 [2024-06-10 08:03:25.366858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.366  Copying: 0/0 [B] (average 0 Bps) 00:08:04.366 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@649 -- # local es=0 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:04.366 08:03:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:04.366 08:03:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.366 08:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:04.366 08:03:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.366 08:03:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.366 08:03:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- 
common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:04.366 [2024-06-10 08:03:26.058680] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:08:04.366 [2024-06-10 08:03:26.058820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64232 ] 00:08:04.366 { 00:08:04.366 "subsystems": [ 00:08:04.366 { 00:08:04.366 "subsystem": "bdev", 00:08:04.366 "config": [ 00:08:04.366 { 00:08:04.366 "params": { 00:08:04.366 "block_size": 512, 00:08:04.366 "num_blocks": 1048576, 00:08:04.366 "name": "malloc0" 00:08:04.366 }, 00:08:04.366 "method": "bdev_malloc_create" 00:08:04.366 }, 00:08:04.366 { 00:08:04.366 "params": { 00:08:04.366 "filename": "/dev/zram1", 00:08:04.366 "name": "uring0" 00:08:04.366 }, 00:08:04.366 "method": "bdev_uring_create" 00:08:04.366 }, 00:08:04.366 { 00:08:04.366 "params": { 00:08:04.366 "name": "uring0" 00:08:04.366 }, 00:08:04.366 "method": "bdev_uring_delete" 00:08:04.366 }, 00:08:04.366 { 00:08:04.366 "method": "bdev_wait_for_examine" 00:08:04.366 } 00:08:04.366 ] 00:08:04.366 } 00:08:04.366 ] 00:08:04.366 } 00:08:04.366 [2024-06-10 08:03:26.197787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.625 [2024-06-10 08:03:26.318437] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.625 [2024-06-10 08:03:26.375293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.884 [2024-06-10 08:03:26.588489] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:04.884 [2024-06-10 08:03:26.588630] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:04.884 [2024-06-10 08:03:26.588657] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:04.884 [2024-06-10 08:03:26.588666] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.143 [2024-06-10 08:03:26.927033] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # es=237 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # es=109 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # case "$es" in 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@669 -- # es=1 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:05.402 
00:08:05.402 ************************************ 00:08:05.402 END TEST dd_uring_copy 00:08:05.402 ************************************ 00:08:05.402 real 0m15.638s 00:08:05.402 user 0m10.459s 00:08:05.402 sys 0m12.483s 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:05.402 08:03:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:05.661 ************************************ 00:08:05.661 END TEST spdk_dd_uring 00:08:05.661 ************************************ 00:08:05.661 00:08:05.661 real 0m15.780s 00:08:05.661 user 0m10.508s 00:08:05.661 sys 0m12.576s 00:08:05.661 08:03:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:05.661 08:03:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:05.661 08:03:27 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:05.661 08:03:27 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:05.661 08:03:27 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:05.661 08:03:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:05.661 ************************************ 00:08:05.661 START TEST spdk_dd_sparse 00:08:05.661 ************************************ 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:05.661 * Looking for test storage... 00:08:05.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:05.661 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:05.661 1+0 records in 00:08:05.661 1+0 records out 00:08:05.661 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00619172 s, 677 MB/s 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:05.662 1+0 records in 00:08:05.662 1+0 records out 00:08:05.662 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0074689 s, 562 MB/s 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:05.662 1+0 records in 00:08:05.662 1+0 records out 00:08:05.662 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00505519 s, 830 MB/s 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:05.662 ************************************ 00:08:05.662 START TEST dd_sparse_file_to_file 00:08:05.662 ************************************ 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # 
file_to_file 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:05.662 08:03:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:05.662 { 00:08:05.662 "subsystems": [ 00:08:05.662 { 00:08:05.662 "subsystem": "bdev", 00:08:05.662 "config": [ 00:08:05.662 { 00:08:05.662 "params": { 00:08:05.662 "block_size": 4096, 00:08:05.662 "filename": "dd_sparse_aio_disk", 00:08:05.662 "name": "dd_aio" 00:08:05.662 }, 00:08:05.662 "method": "bdev_aio_create" 00:08:05.662 }, 00:08:05.662 { 00:08:05.662 "params": { 00:08:05.662 "lvs_name": "dd_lvstore", 00:08:05.662 "bdev_name": "dd_aio" 00:08:05.662 }, 00:08:05.662 "method": "bdev_lvol_create_lvstore" 00:08:05.662 }, 00:08:05.662 { 00:08:05.662 "method": "bdev_wait_for_examine" 00:08:05.662 } 00:08:05.662 ] 00:08:05.662 } 00:08:05.662 ] 00:08:05.662 } 00:08:05.662 [2024-06-10 08:03:27.499192] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
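The JSON printed above is the complete bdev configuration that gen_conf feeds to spdk_dd on /dev/fd/62. A minimal sketch of reproducing the same sparse file-to-file copy outside the harness, assuming the same build/bin/spdk_dd path and a scratch working directory (the config is written to a regular file instead of a pipe):

    # Backing file for the AIO bdev plus a sparse 36 MiB input
    # (three 4 MiB writes at 0, 16 and 32 MiB, matching the prepare step above)
    truncate --size 104857600 dd_sparse_aio_disk
    for off in 0 4 8; do
        dd if=/dev/zero of=file_zero1 bs=4M count=1 seek="$off"
    done

    # Same bdev subsystem config the test generates on the fly
    cat > dd_aio.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
              "method": "bdev_aio_create"
            },
            {
              "params": { "lvs_name": "dd_lvstore", "bdev_name": "dd_aio" },
              "method": "bdev_lvol_create_lvstore"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

    # Sparse-aware copy through spdk_dd (binary path varies per checkout)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json dd_aio.json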
00:08:05.662 [2024-06-10 08:03:27.499293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64323 ] 00:08:05.921 [2024-06-10 08:03:27.638314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.921 [2024-06-10 08:03:27.753344] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.180 [2024-06-10 08:03:27.811380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.439  Copying: 12/36 [MB] (average 1090 MBps) 00:08:06.439 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:06.439 ************************************ 00:08:06.439 END TEST dd_sparse_file_to_file 00:08:06.439 ************************************ 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:06.439 00:08:06.439 real 0m0.734s 00:08:06.439 user 0m0.453s 00:08:06.439 sys 0m0.361s 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:06.439 08:03:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:06.439 ************************************ 00:08:06.439 START TEST dd_sparse_file_to_bdev 00:08:06.439 ************************************ 00:08:06.440 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # file_to_bdev 00:08:06.440 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:06.440 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:06.440 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:06.440 08:03:28 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:06.440 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:06.440 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:06.440 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:06.440 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:06.440 [2024-06-10 08:03:28.271901] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:08:06.440 [2024-06-10 08:03:28.271993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64366 ] 00:08:06.440 { 00:08:06.440 "subsystems": [ 00:08:06.440 { 00:08:06.440 "subsystem": "bdev", 00:08:06.440 "config": [ 00:08:06.440 { 00:08:06.440 "params": { 00:08:06.440 "block_size": 4096, 00:08:06.440 "filename": "dd_sparse_aio_disk", 00:08:06.440 "name": "dd_aio" 00:08:06.440 }, 00:08:06.440 "method": "bdev_aio_create" 00:08:06.440 }, 00:08:06.440 { 00:08:06.440 "params": { 00:08:06.440 "lvs_name": "dd_lvstore", 00:08:06.440 "lvol_name": "dd_lvol", 00:08:06.440 "size_in_mib": 36, 00:08:06.440 "thin_provision": true 00:08:06.440 }, 00:08:06.440 "method": "bdev_lvol_create" 00:08:06.440 }, 00:08:06.440 { 00:08:06.440 "method": "bdev_wait_for_examine" 00:08:06.440 } 00:08:06.440 ] 00:08:06.440 } 00:08:06.440 ] 00:08:06.440 } 00:08:06.699 [2024-06-10 08:03:28.402367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.699 [2024-06-10 08:03:28.500925] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.699 [2024-06-10 08:03:28.557769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.216  Copying: 12/36 [MB] (average 521 MBps) 00:08:07.216 00:08:07.216 ************************************ 00:08:07.216 END TEST dd_sparse_file_to_bdev 00:08:07.216 ************************************ 00:08:07.216 00:08:07.216 real 0m0.666s 00:08:07.216 user 0m0.431s 00:08:07.216 sys 0m0.347s 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:07.216 ************************************ 00:08:07.216 START TEST dd_sparse_bdev_to_file 00:08:07.216 ************************************ 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # bdev_to_file 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s 
stat3_b 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:07.216 08:03:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:07.216 [2024-06-10 08:03:28.985184] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:08:07.216 [2024-06-10 08:03:28.985286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64404 ] 00:08:07.216 { 00:08:07.216 "subsystems": [ 00:08:07.216 { 00:08:07.216 "subsystem": "bdev", 00:08:07.216 "config": [ 00:08:07.216 { 00:08:07.216 "params": { 00:08:07.216 "block_size": 4096, 00:08:07.216 "filename": "dd_sparse_aio_disk", 00:08:07.216 "name": "dd_aio" 00:08:07.216 }, 00:08:07.216 "method": "bdev_aio_create" 00:08:07.216 }, 00:08:07.216 { 00:08:07.216 "method": "bdev_wait_for_examine" 00:08:07.216 } 00:08:07.216 ] 00:08:07.216 } 00:08:07.216 ] 00:08:07.216 } 00:08:07.475 [2024-06-10 08:03:29.118887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.475 [2024-06-10 08:03:29.210263] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.475 [2024-06-10 08:03:29.265684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.733  Copying: 12/36 [MB] (average 857 MBps) 00:08:07.733 00:08:07.733 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:07.733 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:07.733 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:07.733 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:07.733 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:07.733 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:07.733 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:07.992 ************************************ 00:08:07.992 END TEST dd_sparse_bdev_to_file 00:08:07.992 ************************************ 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:07.992 00:08:07.992 real 0m0.661s 00:08:07.992 user 0m0.415s 00:08:07.992 sys 0m0.353s 
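The stat pairs above are the actual pass/fail criterion for the sparse tests: the apparent size (%s) of source and destination must match, and the allocated block count (%b) must match as well, which confirms the holes survived the copy. A minimal sketch of that check, assuming the file_zero2/file_zero3 pair produced by the copy above:

    # Apparent size (%s) and allocated block count (%b) must both match
    [ "$(stat --printf=%s file_zero2)" -eq "$(stat --printf=%s file_zero3)" ] || exit 1
    [ "$(stat --printf=%b file_zero2)" -eq "$(stat --printf=%b file_zero3)" ] || exit 1
    # In the run above both files report 37748736 bytes apparent and 24576 blocks
    # (512-byte units: 24576 * 512 = 12582912 bytes actually allocated)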
00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:07.992 ************************************ 00:08:07.992 END TEST spdk_dd_sparse 00:08:07.992 ************************************ 00:08:07.992 00:08:07.992 real 0m2.352s 00:08:07.992 user 0m1.394s 00:08:07.992 sys 0m1.245s 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.992 08:03:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:07.992 08:03:29 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:07.992 08:03:29 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:07.992 08:03:29 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:07.992 08:03:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:07.992 ************************************ 00:08:07.992 START TEST spdk_dd_negative 00:08:07.992 ************************************ 00:08:07.992 08:03:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:07.992 * Looking for test storage... 00:08:07.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:07.992 08:03:29 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.992 08:03:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.992 08:03:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.992 08:03:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.992 08:03:29 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.993 ************************************ 00:08:07.993 START TEST dd_invalid_arguments 00:08:07.993 ************************************ 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # invalid_arguments 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@649 -- # local es=0 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.993 08:03:29 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.993 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:08.252 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:08.252 00:08:08.252 CPU options: 00:08:08.252 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:08.252 (like [0,1,10]) 00:08:08.253 --lcores lcore to CPU mapping list. The list is in the format: 00:08:08.253 [<,lcores[@CPUs]>...] 00:08:08.253 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:08.253 Within the group, '-' is used for range separator, 00:08:08.253 ',' is used for single number separator. 00:08:08.253 '( )' can be omitted for single element group, 00:08:08.253 '@' can be omitted if cpus and lcores have the same value 00:08:08.253 --disable-cpumask-locks Disable CPU core lock files. 00:08:08.253 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:08.253 pollers in the app support interrupt mode) 00:08:08.253 -p, --main-core main (primary) core for DPDK 00:08:08.253 00:08:08.253 Configuration options: 00:08:08.253 -c, --config, --json JSON config file 00:08:08.253 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:08.253 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:08.253 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:08.253 --rpcs-allowed comma-separated list of permitted RPCS 00:08:08.253 --json-ignore-init-errors don't exit on invalid config entry 00:08:08.253 00:08:08.253 Memory options: 00:08:08.253 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:08.253 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:08.253 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:08.253 -R, --huge-unlink unlink huge files after initialization 00:08:08.253 -n, --mem-channels number of memory channels used for DPDK 00:08:08.253 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:08.253 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:08.253 --no-huge run without using hugepages 00:08:08.253 -i, --shm-id shared memory ID (optional) 00:08:08.253 -g, --single-file-segments force creating just one hugetlbfs file 00:08:08.253 00:08:08.253 PCI options: 00:08:08.253 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:08.253 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:08.253 -u, --no-pci disable PCI access 00:08:08.253 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:08.253 00:08:08.253 Log options: 00:08:08.253 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:08.253 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:08.253 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:08.253 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:08.253 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:08:08.253 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:08:08.253 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:08:08.253 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:08.253 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:08:08.253 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:08:08.253 virtio_vfio_user, vmd) 00:08:08.253 --silence-noticelog disable notice level logging to stderr 00:08:08.253 00:08:08.253 Trace options: 00:08:08.253 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:08.253 setting 0 to disable trace (default 32768) 00:08:08.253 Tracepoints vary in size and can use more than one trace entry. 00:08:08.253 -e, --tpoint-group [:] 00:08:08.253 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:08.253 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:08.253 [2024-06-10 08:03:29.872187] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:08.253 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:08:08.253 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:08.253 a tracepoint group. First tpoint inside a group can be enabled by 00:08:08.253 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:08.253 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:08.253 in /include/spdk_internal/trace_defs.h 00:08:08.253 00:08:08.253 Other options: 00:08:08.253 -h, --help show this usage 00:08:08.253 -v, --version print SPDK version 00:08:08.253 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:08.253 --env-context Opaque context for use of the env implementation 00:08:08.253 00:08:08.253 Application specific: 00:08:08.253 [--------- DD Options ---------] 00:08:08.253 --if Input file. Must specify either --if or --ib. 00:08:08.253 --ib Input bdev. Must specifier either --if or --ib 00:08:08.253 --of Output file. Must specify either --of or --ob. 00:08:08.253 --ob Output bdev. Must specify either --of or --ob. 00:08:08.253 --iflag Input file flags. 00:08:08.253 --oflag Output file flags. 00:08:08.253 --bs I/O unit size (default: 4096) 00:08:08.253 --qd Queue depth (default: 2) 00:08:08.253 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:08.253 --skip Skip this many I/O units at start of input. (default: 0) 00:08:08.253 --seek Skip this many I/O units at start of output. (default: 0) 00:08:08.253 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:08.253 --sparse Enable hole skipping in input target 00:08:08.253 Available iflag and oflag values: 00:08:08.253 append - append mode 00:08:08.253 direct - use direct I/O for data 00:08:08.253 directory - fail unless a directory 00:08:08.253 dsync - use synchronized I/O for data 00:08:08.253 noatime - do not update access time 00:08:08.253 noctty - do not assign controlling terminal from file 00:08:08.253 nofollow - do not follow symlinks 00:08:08.253 nonblock - use non-blocking I/O 00:08:08.253 sync - use synchronized I/O for data and metadata 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # es=2 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.253 00:08:08.253 real 0m0.074s 00:08:08.253 user 0m0.044s 00:08:08.253 sys 0m0.029s 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:08.253 ************************************ 00:08:08.253 END TEST dd_invalid_arguments 00:08:08.253 ************************************ 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.253 ************************************ 00:08:08.253 START TEST dd_double_input 00:08:08.253 ************************************ 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # double_input 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@649 -- # local es=0 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.253 08:03:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:08.253 [2024-06-10 08:03:29.998094] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
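Each negative case that follows has the same shape: run spdk_dd with a contradictory or malformed option set and require a non-zero exit alongside the specific error text, as with the --if/--ib conflict above. A standalone sketch of that check, without the harness's NOT helper and with the output captured to a file (the binary path and the locally created dump file are assumptions):

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    touch dd.dump0

    # Supplying both an input file (--if) and an input bdev (--ib) must be rejected
    if "$SPDK_DD" --if=dd.dump0 --ib= --ob= > out.log 2>&1; then
        echo "unexpected success"; exit 1
    fi
    grep -q 'You may specify either --if or --ib, but not both' out.log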
00:08:08.253 08:03:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # es=22 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.254 00:08:08.254 real 0m0.076s 00:08:08.254 user 0m0.051s 00:08:08.254 sys 0m0.023s 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:08.254 ************************************ 00:08:08.254 END TEST dd_double_input 00:08:08.254 ************************************ 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.254 ************************************ 00:08:08.254 START TEST dd_double_output 00:08:08.254 ************************************ 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # double_output 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@649 -- # local es=0 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.254 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:08.513 [2024-06-10 08:03:30.122303] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:08.513 ************************************ 00:08:08.513 END TEST dd_double_output 00:08:08.513 ************************************ 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # es=22 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.513 00:08:08.513 real 0m0.071s 00:08:08.513 user 0m0.041s 00:08:08.513 sys 0m0.028s 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.513 ************************************ 00:08:08.513 START TEST dd_no_input 00:08:08.513 ************************************ 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # no_input 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@649 -- # local es=0 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:08.513 [2024-06-10 08:03:30.246941] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # es=22 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.513 00:08:08.513 real 0m0.072s 00:08:08.513 user 0m0.050s 00:08:08.513 sys 0m0.022s 00:08:08.513 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.514 ************************************ 00:08:08.514 END TEST dd_no_input 00:08:08.514 ************************************ 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.514 ************************************ 00:08:08.514 START TEST dd_no_output 00:08:08.514 ************************************ 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # no_output 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@649 -- # local es=0 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.514 [2024-06-10 08:03:30.362597] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:08.514 08:03:30 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # es=22 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:08.514 ************************************ 00:08:08.514 END TEST dd_no_output 00:08:08.514 ************************************ 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.514 00:08:08.514 real 0m0.062s 00:08:08.514 user 0m0.033s 00:08:08.514 sys 0m0.027s 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.514 08:03:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.773 ************************************ 00:08:08.773 START TEST dd_wrong_blocksize 00:08:08.773 ************************************ 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # wrong_blocksize 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@649 -- # local es=0 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.773 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:08.774 [2024-06-10 08:03:30.483306] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:08.774 ************************************ 00:08:08.774 END TEST dd_wrong_blocksize 00:08:08.774 ************************************ 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # es=22 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.774 00:08:08.774 real 0m0.071s 00:08:08.774 user 0m0.044s 00:08:08.774 sys 0m0.026s 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.774 ************************************ 00:08:08.774 START TEST dd_smaller_blocksize 00:08:08.774 ************************************ 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # smaller_blocksize 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@649 -- # local es=0 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.774 
08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.774 08:03:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:08.774 [2024-06-10 08:03:30.595033] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:08:08.774 [2024-06-10 08:03:30.595111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64621 ] 00:08:09.038 [2024-06-10 08:03:30.731552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.038 [2024-06-10 08:03:30.854562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.304 [2024-06-10 08:03:30.912551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.563 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:09.563 [2024-06-10 08:03:31.258463] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:09.563 [2024-06-10 08:03:31.258532] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.563 [2024-06-10 08:03:31.375827] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # es=244 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # es=116 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # case "$es" in 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@669 -- # es=1 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:09.822 00:08:09.822 real 0m0.920s 00:08:09.822 user 0m0.423s 00:08:09.822 sys 0m0.390s 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:09.822 ************************************ 00:08:09.822 END TEST dd_smaller_blocksize 00:08:09.822 ************************************ 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:09.822 ************************************ 00:08:09.822 START TEST dd_invalid_count 00:08:09.822 ************************************ 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # invalid_count 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@649 -- # local es=0 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:09.822 [2024-06-10 08:03:31.579482] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # es=22 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:09.822 ************************************ 00:08:09.822 END TEST dd_invalid_count 00:08:09.822 ************************************ 00:08:09.822 00:08:09.822 real 0m0.074s 00:08:09.822 user 0m0.046s 00:08:09.822 sys 0m0.026s 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:09.822 ************************************ 00:08:09.822 START TEST dd_invalid_oflag 00:08:09.822 ************************************ 
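The two argument checks on either side of this point, a negative --count above and an --oflag given without --of below, can be driven the same way; a minimal sketch under the same assumptions as the previous snippet:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    touch dd.dump0 dd.dump1

    # --count must be a non-negative number of I/O units
    ! "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --count=-9 || { echo "count=-9 accepted"; exit 1; }

    # --oflag is only valid together with --of, so it is rejected with bdev targets
    ! "$SPDK_DD" --ib= --ob= --oflag=0 || { echo "stray --oflag accepted"; exit 1; }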
00:08:09.822 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # invalid_oflag 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@649 -- # local es=0 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:09.823 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:10.082 [2024-06-10 08:03:31.701587] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # es=22 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:10.082 ************************************ 00:08:10.082 END TEST dd_invalid_oflag 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:10.082 00:08:10.082 real 0m0.078s 00:08:10.082 user 0m0.050s 00:08:10.082 sys 0m0.026s 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:10.082 ************************************ 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:10.082 ************************************ 00:08:10.082 START TEST dd_invalid_iflag 00:08:10.082 ************************************ 00:08:10.082 08:03:31 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # invalid_iflag 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@649 -- # local es=0 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.082 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:10.083 [2024-06-10 08:03:31.829014] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # es=22 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:10.083 00:08:10.083 real 0m0.072s 00:08:10.083 user 0m0.051s 00:08:10.083 sys 0m0.020s 00:08:10.083 ************************************ 00:08:10.083 END TEST dd_invalid_iflag 00:08:10.083 ************************************ 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:10.083 ************************************ 00:08:10.083 START TEST dd_unknown_flag 00:08:10.083 ************************************ 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- 
common/autotest_common.sh@1124 -- # unknown_flag 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@649 -- # local es=0 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.083 08:03:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:10.342 [2024-06-10 08:03:31.951018] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:08:10.342 [2024-06-10 08:03:31.951091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64721 ] 00:08:10.342 [2024-06-10 08:03:32.080637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.342 [2024-06-10 08:03:32.177738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.600 [2024-06-10 08:03:32.233935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.600 [2024-06-10 08:03:32.267541] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:10.600 [2024-06-10 08:03:32.267614] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.600 [2024-06-10 08:03:32.267692] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:10.600 [2024-06-10 08:03:32.267705] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.600 [2024-06-10 08:03:32.267967] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:10.600 [2024-06-10 08:03:32.267984] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.600 [2024-06-10 08:03:32.268035] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:10.600 [2024-06-10 08:03:32.268046] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:10.600 [2024-06-10 08:03:32.379924] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # es=234 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # es=106 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # case "$es" in 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@669 -- # es=1 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:10.858 00:08:10.858 real 0m0.576s 00:08:10.858 user 0m0.320s 00:08:10.858 sys 0m0.163s 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:10.858 ************************************ 00:08:10.858 END TEST dd_unknown_flag 00:08:10.858 ************************************ 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:10.858 ************************************ 00:08:10.858 START TEST dd_invalid_json 00:08:10.858 ************************************ 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # invalid_json 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@649 -- # local es=0 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.858 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:10.858 [2024-06-10 08:03:32.582844] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:08:10.858 [2024-06-10 08:03:32.582951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64744 ] 00:08:10.858 [2024-06-10 08:03:32.722104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.117 [2024-06-10 08:03:32.826776] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.117 [2024-06-10 08:03:32.826884] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:11.117 [2024-06-10 08:03:32.826902] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:11.117 [2024-06-10 08:03:32.826911] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.117 [2024-06-10 08:03:32.826961] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:11.117 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # es=234 00:08:11.117 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:11.117 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # es=106 00:08:11.117 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # case "$es" in 00:08:11.117 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@669 -- # es=1 00:08:11.117 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:11.117 00:08:11.117 real 0m0.392s 00:08:11.117 user 0m0.210s 00:08:11.117 sys 0m0.080s 00:08:11.117 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:11.117 ************************************ 00:08:11.117 END TEST dd_invalid_json 00:08:11.117 ************************************ 00:08:11.117 08:03:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:11.117 ************************************ 00:08:11.117 END TEST spdk_dd_negative 00:08:11.117 ************************************ 00:08:11.117 00:08:11.117 real 0m3.241s 00:08:11.117 user 0m1.562s 00:08:11.117 sys 0m1.316s 00:08:11.117 08:03:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:11.117 08:03:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:11.375 ************************************ 00:08:11.375 END TEST spdk_dd 00:08:11.375 ************************************ 00:08:11.375 00:08:11.375 real 1m19.554s 00:08:11.375 user 0m51.736s 00:08:11.375 sys 0m33.926s 00:08:11.375 08:03:33 spdk_dd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:11.375 08:03:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:11.375 08:03:33 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:11.375 08:03:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:11.375 08:03:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:11.375 08:03:33 -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:11.375 08:03:33 -- common/autotest_common.sh@10 -- # set +x 00:08:11.375 08:03:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:11.375 08:03:33 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:11.375 08:03:33 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:11.375 08:03:33 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:11.375 08:03:33 -- spdk/autotest.sh@283 -- # '[' tcp = 
rdma ']' 00:08:11.375 08:03:33 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:11.375 08:03:33 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:11.375 08:03:33 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:11.375 08:03:33 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:11.375 08:03:33 -- common/autotest_common.sh@10 -- # set +x 00:08:11.375 ************************************ 00:08:11.375 START TEST nvmf_tcp 00:08:11.375 ************************************ 00:08:11.375 08:03:33 nvmf_tcp -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:11.375 * Looking for test storage... 00:08:11.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.375 08:03:33 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.376 08:03:33 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.376 08:03:33 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.376 08:03:33 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.376 08:03:33 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.376 08:03:33 nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.376 08:03:33 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.376 08:03:33 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:11.376 08:03:33 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:11.376 08:03:33 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:11.376 08:03:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:11.376 08:03:33 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:11.376 08:03:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:11.376 08:03:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:11.376 08:03:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.376 ************************************ 00:08:11.376 START TEST nvmf_host_management 00:08:11.376 ************************************ 00:08:11.376 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:11.634 * Looking for test storage... 
00:08:11.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
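
nvmf/common.sh, sourced just above, derives a host NQN and host ID with `nvme gen-hostnqn` and keeps the pieces in NVME_HOST, NVME_CONNECT and NVME_SUBNQN. A hedged illustration of how those variables typically compose into an nvme-cli connect call once a listener is up; the address, port and default subsystem NQN are the values visible in this log, while this particular test drives I/O through bdevperf rather than the kernel initiator:

# Values as set by nvmf/common.sh in the trace above.
NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:0b063e5e-...
NVME_HOSTID=${NVME_HOSTNQN##*:}             # uuid suffix of the host NQN, as in the values above
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVME_CONNECT='nvme connect'
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# Illustrative only: attach a kernel NVMe/TCP host to the target listener.
$NVME_CONNECT "${NVME_HOST[@]}" -t tcp -n "$NVME_SUBNQN" \
    -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
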
00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:11.635 Cannot find device "nvmf_init_br" 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:11.635 Cannot find device "nvmf_tgt_br" 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.635 Cannot find device "nvmf_tgt_br2" 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:11.635 Cannot find device "nvmf_init_br" 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:11.635 Cannot find device "nvmf_tgt_br" 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:11.635 08:03:33 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:11.635 Cannot find device "nvmf_tgt_br2" 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:11.635 Cannot find device "nvmf_br" 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:11.635 Cannot find device "nvmf_init_if" 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.635 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
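
Condensing the nvmf_veth_init commands traced in this block and the next into one commented script: a network namespace holds the SPDK target, two veth pairs give it 10.0.0.2 and 10.0.0.3, the host-side initiator interface gets 10.0.0.1, and a bridge plus iptables ACCEPT rules tie the peer ends together. All names and addresses are exactly those in the log:

# Target side lives in its own namespace.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-facing ends into the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, inside and outside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the peer ends and allow NVMe/TCP (port 4420) in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check, as in the ping output below: both target addresses reachable
# from the host side, and 10.0.0.1 reachable from inside the namespace.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
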
00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:11.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:08:11.894 00:08:11.894 --- 10.0.0.2 ping statistics --- 00:08:11.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.894 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:11.894 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.894 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:08:11.894 00:08:11.894 --- 10.0.0.3 ping statistics --- 00:08:11.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.894 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:08:11.894 00:08:11.894 --- 10.0.0.1 ping statistics --- 00:08:11.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.894 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65001 00:08:11.894 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65001 00:08:11.895 08:03:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:11.895 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 65001 ']' 00:08:11.895 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.895 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:11.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.895 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.895 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:11.895 08:03:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.153 [2024-06-10 08:03:33.816467] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:08:12.153 [2024-06-10 08:03:33.817128] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.153 [2024-06-10 08:03:33.962510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.412 [2024-06-10 08:03:34.089222] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.412 [2024-06-10 08:03:34.089559] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.412 [2024-06-10 08:03:34.089730] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.412 [2024-06-10 08:03:34.089986] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.412 [2024-06-10 08:03:34.090037] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
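
nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket is serviceable. A reduced, hypothetical form of that wait loop, assuming the binary path and flags shown in the trace; the real helper additionally verifies that the RPC server answers, here we only wait for the UNIX socket to appear while checking the PID stays alive:

# Launch the target inside the namespace, as in the trace above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Hypothetical reduced form of waitforlisten.
wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while ((retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; return 1; }
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

wait_for_rpc_socket "$nvmfpid"
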
00:08:12.412 [2024-06-10 08:03:34.090257] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.412 [2024-06-10 08:03:34.090684] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.412 [2024-06-10 08:03:34.090857] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:08:12.412 [2024-06-10 08:03:34.090869] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.412 [2024-06-10 08:03:34.169099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.979 [2024-06-10 08:03:34.758471] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.979 Malloc0 00:08:12.979 [2024-06-10 08:03:34.831536] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:12.979 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
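
The rpc_cmd calls earlier in this block create the TCP transport, a Malloc0 bdev and a cnode0 subsystem listening on 10.0.0.2:4420; the batched RPC payload itself (rpcs.txt) is not visible in this excerpt. A representative sequence of equivalent individual rpc.py calls, assuming rpc.py at its usual scripts/ location and using only values that appear in the surrounding log, not necessarily the exact batch the suite generates:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location within this repo

# TCP transport with the same flags passed to rpc_cmd above.
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB, 512 B-block malloc bdev (MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 above).
$RPC bdev_malloc_create 64 512 -b Malloc0

# Subsystem cnode0 with the serial from nvmf/common.sh, namespace Malloc0,
# and a listener on the namespaced target address.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# host0 is the host NQN that the add/remove-host steps below toggle.
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
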
00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65064 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65064 /var/tmp/bdevperf.sock 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 65064 ']' 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:13.238 { 00:08:13.238 "params": { 00:08:13.238 "name": "Nvme$subsystem", 00:08:13.238 "trtype": "$TEST_TRANSPORT", 00:08:13.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.238 "adrfam": "ipv4", 00:08:13.238 "trsvcid": "$NVMF_PORT", 00:08:13.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.238 "hdgst": ${hdgst:-false}, 00:08:13.238 "ddgst": ${ddgst:-false} 00:08:13.238 }, 00:08:13.238 "method": "bdev_nvme_attach_controller" 00:08:13.238 } 00:08:13.238 EOF 00:08:13.238 )") 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:13.238 08:03:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:13.238 "params": { 00:08:13.238 "name": "Nvme0", 00:08:13.238 "trtype": "tcp", 00:08:13.238 "traddr": "10.0.0.2", 00:08:13.238 "adrfam": "ipv4", 00:08:13.238 "trsvcid": "4420", 00:08:13.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:13.238 "hdgst": false, 00:08:13.238 "ddgst": false 00:08:13.238 }, 00:08:13.238 "method": "bdev_nvme_attach_controller" 00:08:13.238 }' 00:08:13.238 [2024-06-10 08:03:34.929468] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
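
The gen_nvmf_target_json heredoc above renders into a plain bdev_nvme_attach_controller config that bdevperf reads through /dev/fd/63. Written out to a regular file, a comparable run looks like the sketch below; the params mirror the values printed in the trace, while the outer "subsystems"/"config" envelope is assumed to match SPDK's standard --json configuration layout rather than copied from this log:

# Hypothetical file in place of the /dev/fd/63 stream used by the test.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same bdevperf invocation as in the trace: 64-deep queue, 64 KiB I/O,
# verify workload for 10 seconds, private RPC socket for the waitforio polling.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
    -q 64 -o 65536 -w verify -t 10
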
00:08:13.239 [2024-06-10 08:03:34.929711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65064 ] 00:08:13.239 [2024-06-10 08:03:35.070228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.497 [2024-06-10 08:03:35.186371] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.497 [2024-06-10 08:03:35.251766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.755 Running I/O for 10 seconds... 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:14.323 
08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.323 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.323 [2024-06-10 08:03:35.993427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.323 [2024-06-10 08:03:35.993687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.323 [2024-06-10 08:03:35.993709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.323 [2024-06-10 08:03:35.993719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:35.993729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.324 [2024-06-10 08:03:35.993739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:35.993760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.324 [2024-06-10 08:03:35.993769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:35.993792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691580 is same with the state(5) to be set 00:08:14.324 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.324 08:03:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:14.324 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.324 08:03:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.324 08:03:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.324 08:03:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:14.324 [2024-06-10 08:03:36.015611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.015890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.015931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.015944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.015957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.015967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.015979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.015989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.324 [2024-06-10 08:03:36.016709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.324 [2024-06-10 08:03:36.016720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.016730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.016742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.016751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.016763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.016772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.016784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.016793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.016804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.017164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.017342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.017499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.017733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.017842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.017860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.017870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.017882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.017892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.017903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.017912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.017924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.017939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.017951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.017961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.017972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.017981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.017992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.325 [2024-06-10 08:03:36.018380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.325 [2024-06-10 08:03:36.018392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6925c0 is same with the state(5) to be set 00:08:14.325 [2024-06-10 08:03:36.018469] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6925c0 was disconnected and freed. reset controller. 00:08:14.325 [2024-06-10 08:03:36.018566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691580 (9): Bad file descriptor 00:08:14.325 [2024-06-10 08:03:36.019658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:14.325 task offset: 122880 on job bdev=Nvme0n1 fails 00:08:14.325 00:08:14.325 Latency(us) 00:08:14.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.325 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:14.325 Job: Nvme0n1 ended in about 0.65 seconds with error 00:08:14.325 Verification LBA range: start 0x0 length 0x400 00:08:14.325 Nvme0n1 : 0.65 1473.03 92.06 98.20 0.00 39584.77 3381.06 41228.10 00:08:14.325 =================================================================================================================== 00:08:14.325 Total : 1473.03 92.06 98.20 0.00 39584.77 3381.06 41228.10 00:08:14.325 [2024-06-10 08:03:36.022342] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.325 [2024-06-10 08:03:36.033972] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
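A note on the burst of notices above: the host's TCP qpair to the target went away mid-run (the "Bad file descriptor" flush on tqpair=0x691580), so every WRITE still queued on the I/O qpair at depth 64 (cid 0 through cid 63) was completed with ABORTED - SQ DELETION (00/08), qpair 0x6925c0 was freed, the controller was reset, and the bdevperf job ended after roughly 0.65 seconds with an error, as the summary table shows. When triaging a saved copy of a console log like this one, a one-liner along the following lines tallies the aborted completions per queue ID; the file name build.log is an assumption, and grep/sort/uniq are ordinary tools rather than part of the test scripts.

# Hypothetical triage helper for a saved copy of this console output (file name assumed).
# Counts ABORTED - SQ DELETION completions per queue ID so an abort flood like the
# one above stands out immediately.
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c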
00:08:15.262 08:03:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65064 00:08:15.262 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65064) - No such process 00:08:15.262 08:03:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:15.262 08:03:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:15.262 08:03:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:15.262 08:03:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:15.262 08:03:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:15.262 08:03:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:15.262 08:03:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:15.262 08:03:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:15.262 { 00:08:15.262 "params": { 00:08:15.262 "name": "Nvme$subsystem", 00:08:15.262 "trtype": "$TEST_TRANSPORT", 00:08:15.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.262 "adrfam": "ipv4", 00:08:15.262 "trsvcid": "$NVMF_PORT", 00:08:15.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.262 "hdgst": ${hdgst:-false}, 00:08:15.262 "ddgst": ${ddgst:-false} 00:08:15.263 }, 00:08:15.263 "method": "bdev_nvme_attach_controller" 00:08:15.263 } 00:08:15.263 EOF 00:08:15.263 )") 00:08:15.263 08:03:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:15.263 08:03:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:15.263 08:03:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:15.263 08:03:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:15.263 "params": { 00:08:15.263 "name": "Nvme0", 00:08:15.263 "trtype": "tcp", 00:08:15.263 "traddr": "10.0.0.2", 00:08:15.263 "adrfam": "ipv4", 00:08:15.263 "trsvcid": "4420", 00:08:15.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:15.263 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:15.263 "hdgst": false, 00:08:15.263 "ddgst": false 00:08:15.263 }, 00:08:15.263 "method": "bdev_nvme_attach_controller" 00:08:15.263 }' 00:08:15.263 [2024-06-10 08:03:37.067104] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:08:15.263 [2024-06-10 08:03:37.067204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65103 ] 00:08:15.521 [2024-06-10 08:03:37.205744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.521 [2024-06-10 08:03:37.295861] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.521 [2024-06-10 08:03:37.357872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:15.779 Running I/O for 1 seconds... 
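The bdevperf re-attach step above consumes a JSON config generated on the fly by gen_nvmf_target_json and handed over through /dev/fd/62; the resolved bdev_nvme_attach_controller entry is the block printed just before the EAL parameters. A rough hand-run equivalent writes the same entry into a file first. The attach parameters and the bdevperf flags are copied from this trace; the outer "subsystems"/"bdev"/"config" wrapper is the usual SPDK JSON-config shape and /tmp/nvme0.json is an arbitrary file name, both assumptions rather than something this script prints.

# Write the attach-controller entry from the trace above into a standalone config file.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same 64-deep, 64 KiB verify workload the script just launched, run for 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1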
00:08:16.728 00:08:16.728 Latency(us) 00:08:16.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.728 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:16.728 Verification LBA range: start 0x0 length 0x400 00:08:16.728 Nvme0n1 : 1.01 1581.34 98.83 0.00 0.00 39678.61 4021.53 37415.10 00:08:16.728 =================================================================================================================== 00:08:16.728 Total : 1581.34 98.83 0.00 0.00 39678.61 4021.53 37415.10 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.986 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.986 rmmod nvme_tcp 00:08:16.987 rmmod nvme_fabrics 00:08:16.987 rmmod nvme_keyring 00:08:16.987 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65001 ']' 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65001 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 65001 ']' 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 65001 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 65001 00:08:17.246 killing process with pid 65001 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 65001' 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 65001 00:08:17.246 08:03:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 65001 00:08:17.505 [2024-06-10 08:03:39.112329] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:17.505 00:08:17.505 real 0m5.947s 00:08:17.505 user 0m22.732s 00:08:17.505 sys 0m1.556s 00:08:17.505 ************************************ 00:08:17.505 END TEST nvmf_host_management 00:08:17.505 ************************************ 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:17.505 08:03:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.505 08:03:39 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:17.505 08:03:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:17.505 08:03:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:17.505 08:03:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:17.505 ************************************ 00:08:17.505 START TEST nvmf_lvol 00:08:17.505 ************************************ 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:17.505 * Looking for test storage... 
00:08:17.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:17.505 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:17.506 08:03:39 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:17.506 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:17.765 Cannot find device "nvmf_tgt_br" 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:17.765 Cannot find device "nvmf_tgt_br2" 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:17.765 Cannot find device "nvmf_tgt_br" 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:17.765 Cannot find device "nvmf_tgt_br2" 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:17.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:17.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:17.765 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:18.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:08:18.024 00:08:18.024 --- 10.0.0.2 ping statistics --- 00:08:18.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.024 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:18.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:18.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:08:18.024 00:08:18.024 --- 10.0.0.3 ping statistics --- 00:08:18.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.024 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:18.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:18.024 00:08:18.024 --- 10.0.0.1 ping statistics --- 00:08:18.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.024 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:18.024 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:18.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65316 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65316 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 65316 ']' 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:18.025 08:03:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:18.025 [2024-06-10 08:03:39.785399] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:08:18.025 [2024-06-10 08:03:39.785704] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.284 [2024-06-10 08:03:39.921295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.284 [2024-06-10 08:03:40.033959] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.284 [2024-06-10 08:03:40.034247] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:18.284 [2024-06-10 08:03:40.034407] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.284 [2024-06-10 08:03:40.034563] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.284 [2024-06-10 08:03:40.034605] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.284 [2024-06-10 08:03:40.034878] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.284 [2024-06-10 08:03:40.034977] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.284 [2024-06-10 08:03:40.034984] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.284 [2024-06-10 08:03:40.093730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.221 08:03:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:19.221 08:03:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:08:19.221 08:03:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.221 08:03:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:19.221 08:03:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.221 08:03:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.221 08:03:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:19.480 [2024-06-10 08:03:41.099157] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.480 08:03:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:19.739 08:03:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:19.739 08:03:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:19.998 08:03:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:19.998 08:03:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:20.257 08:03:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:20.514 08:03:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5e4ea3ad-80ad-45cb-9474-04d87131aac4 00:08:20.514 08:03:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5e4ea3ad-80ad-45cb-9474-04d87131aac4 lvol 20 00:08:20.772 08:03:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4dc960c4-9527-4427-82ac-5775084491f9 00:08:20.772 08:03:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:21.031 08:03:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4dc960c4-9527-4427-82ac-5775084491f9 00:08:21.290 08:03:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:21.548 [2024-06-10 08:03:43.263008] 
tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.548 08:03:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.806 08:03:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65397 00:08:21.806 08:03:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:21.806 08:03:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:22.744 08:03:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4dc960c4-9527-4427-82ac-5775084491f9 MY_SNAPSHOT 00:08:23.313 08:03:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=179c8738-7905-41a8-8d15-afee94f3a2fb 00:08:23.313 08:03:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4dc960c4-9527-4427-82ac-5775084491f9 30 00:08:23.572 08:03:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 179c8738-7905-41a8-8d15-afee94f3a2fb MY_CLONE 00:08:23.830 08:03:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2e30f527-47a8-404f-a74a-feec4066bb1a 00:08:23.830 08:03:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 2e30f527-47a8-404f-a74a-feec4066bb1a 00:08:24.089 08:03:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65397 00:08:32.201 Initializing NVMe Controllers 00:08:32.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:32.201 Controller IO queue size 128, less than required. 00:08:32.201 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:32.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:32.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:32.201 Initialization complete. Launching workers. 
00:08:32.201 ======================================================== 00:08:32.201 Latency(us) 00:08:32.201 Device Information : IOPS MiB/s Average min max 00:08:32.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10695.18 41.78 11976.38 840.00 66206.27 00:08:32.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10639.78 41.56 12032.94 2398.65 59276.73 00:08:32.201 ======================================================== 00:08:32.201 Total : 21334.95 83.34 12004.59 840.00 66206.27 00:08:32.201 00:08:32.201 08:03:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:32.460 08:03:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4dc960c4-9527-4427-82ac-5775084491f9 00:08:32.718 08:03:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5e4ea3ad-80ad-45cb-9474-04d87131aac4 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:32.976 rmmod nvme_tcp 00:08:32.976 rmmod nvme_fabrics 00:08:32.976 rmmod nvme_keyring 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65316 ']' 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65316 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 65316 ']' 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 65316 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:32.976 08:03:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 65316 00:08:33.240 killing process with pid 65316 00:08:33.240 08:03:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:33.240 08:03:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:33.240 08:03:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 65316' 00:08:33.240 08:03:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 65316 00:08:33.240 08:03:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 65316 00:08:33.499 08:03:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.499 08:03:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.499 
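For reference, the stack exercised by this lvol run condenses to the rpc.py sequence below: two 64 MiB malloc bdevs striped into a RAID-0, an lvstore on top, a 20 MiB lvol exported over NVMe/TCP, and then a snapshot, a resize to 30 MiB, a clone and an inflate while spdk_nvme_perf keeps writing. Every command, size and address is copied from the trace above; the shell variables, the UUID capture via command substitution and the default RPC socket (/var/tmp/spdk.sock) are assumptions made for this sketch.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # Malloc0
$rpc bdev_malloc_create 64 512                                   # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB volume, prints its UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# spdk_nvme_perf runs randwrite against 10.0.0.2:4420 while the volume is reshaped:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                                 # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                                  # copy parent data in so the clone no longer needs the snapshot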
08:03:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.499 08:03:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.499 08:03:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:33.500 ************************************ 00:08:33.500 END TEST nvmf_lvol 00:08:33.500 ************************************ 00:08:33.500 00:08:33.500 real 0m15.940s 00:08:33.500 user 1m6.032s 00:08:33.500 sys 0m4.360s 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:33.500 08:03:55 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:33.500 08:03:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:33.500 08:03:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:33.500 08:03:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.500 ************************************ 00:08:33.500 START TEST nvmf_lvs_grow 00:08:33.500 ************************************ 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:33.500 * Looking for test storage... 
00:08:33.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:33.500 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:33.759 Cannot find device "nvmf_tgt_br" 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.759 Cannot find device "nvmf_tgt_br2" 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:33.759 Cannot find device "nvmf_tgt_br" 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:33.759 Cannot find device "nvmf_tgt_br2" 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.759 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.759 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:34.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:34.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:08:34.018 00:08:34.018 --- 10.0.0.2 ping statistics --- 00:08:34.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.018 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:34.018 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:34.018 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:08:34.018 00:08:34.018 --- 10.0.0.3 ping statistics --- 00:08:34.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.018 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:34.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:34.018 00:08:34.018 --- 10.0.0.1 ping statistics --- 00:08:34.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.018 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65721 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65721 00:08:34.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 65721 ']' 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
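
For orientation, the nvmf_veth_init entries above boil down to the following topology and target launch. This is a condensed sketch using only the interface names, addresses, and paths that appear in the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is configured the same way and omitted here, and the final polling loop is a simplified stand-in for the suite's waitforlisten helper, not its real implementation.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # the reachability checks logged above

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # simplified; the suite uses waitforlisten
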
00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:34.018 08:03:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:34.018 [2024-06-10 08:03:55.739723] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:08:34.018 [2024-06-10 08:03:55.739843] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.018 [2024-06-10 08:03:55.881912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.277 [2024-06-10 08:03:55.992843] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.277 [2024-06-10 08:03:55.992909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.277 [2024-06-10 08:03:55.992936] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.277 [2024-06-10 08:03:55.992944] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.277 [2024-06-10 08:03:55.992952] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.277 [2024-06-10 08:03:55.992983] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.277 [2024-06-10 08:03:56.050443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:34.845 08:03:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:34.845 08:03:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:08:34.845 08:03:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:34.845 08:03:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:34.845 08:03:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:35.104 08:03:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.104 08:03:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:35.363 [2024-06-10 08:03:56.971431] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.363 08:03:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:35.363 08:03:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:35.363 08:03:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:35.363 08:03:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:35.363 ************************************ 00:08:35.363 START TEST lvs_grow_clean 00:08:35.363 ************************************ 00:08:35.363 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:08:35.363 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:35.363 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:35.363 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:35.363 08:03:57 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:35.363 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:35.363 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:35.363 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.363 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.363 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:35.622 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:35.622 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:35.880 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2ed359f0-9df3-4748-8490-26c684d5da37 00:08:35.880 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:35.880 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:36.139 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:36.139 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:36.139 08:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2ed359f0-9df3-4748-8490-26c684d5da37 lvol 150 00:08:36.398 08:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9b0e74ad-7f1c-4613-8575-21ad63ddbacc 00:08:36.398 08:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:36.398 08:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:36.656 [2024-06-10 08:03:58.371813] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:36.657 [2024-06-10 08:03:58.371942] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:36.657 true 00:08:36.657 08:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:36.657 08:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:36.916 08:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:36.916 08:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:37.175 08:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b0e74ad-7f1c-4613-8575-21ad63ddbacc 00:08:37.434 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:37.692 [2024-06-10 08:03:59.376329] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.692 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:37.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:37.951 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65809 00:08:37.951 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:37.951 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:37.951 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65809 /var/tmp/bdevperf.sock 00:08:37.951 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 65809 ']' 00:08:37.951 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:37.951 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:37.951 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:37.951 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:37.951 08:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:37.951 [2024-06-10 08:03:59.664241] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:08:37.951 [2024-06-10 08:03:59.664535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65809 ] 00:08:37.951 [2024-06-10 08:03:59.797082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.209 [2024-06-10 08:03:59.905084] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.209 [2024-06-10 08:03:59.962169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:38.775 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:38.775 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:08:38.775 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:39.345 Nvme0n1 00:08:39.345 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:39.345 [ 00:08:39.345 { 00:08:39.345 "name": "Nvme0n1", 00:08:39.345 "aliases": [ 00:08:39.345 "9b0e74ad-7f1c-4613-8575-21ad63ddbacc" 00:08:39.345 ], 00:08:39.345 "product_name": "NVMe disk", 00:08:39.345 "block_size": 4096, 00:08:39.345 "num_blocks": 38912, 00:08:39.345 "uuid": "9b0e74ad-7f1c-4613-8575-21ad63ddbacc", 00:08:39.345 "assigned_rate_limits": { 00:08:39.345 "rw_ios_per_sec": 0, 00:08:39.345 "rw_mbytes_per_sec": 0, 00:08:39.345 "r_mbytes_per_sec": 0, 00:08:39.345 "w_mbytes_per_sec": 0 00:08:39.345 }, 00:08:39.345 "claimed": false, 00:08:39.345 "zoned": false, 00:08:39.345 "supported_io_types": { 00:08:39.345 "read": true, 00:08:39.345 "write": true, 00:08:39.345 "unmap": true, 00:08:39.345 "write_zeroes": true, 00:08:39.345 "flush": true, 00:08:39.345 "reset": true, 00:08:39.345 "compare": true, 00:08:39.345 "compare_and_write": true, 00:08:39.345 "abort": true, 00:08:39.345 "nvme_admin": true, 00:08:39.345 "nvme_io": true 00:08:39.345 }, 00:08:39.345 "memory_domains": [ 00:08:39.345 { 00:08:39.345 "dma_device_id": "system", 00:08:39.345 "dma_device_type": 1 00:08:39.345 } 00:08:39.345 ], 00:08:39.345 "driver_specific": { 00:08:39.345 "nvme": [ 00:08:39.345 { 00:08:39.345 "trid": { 00:08:39.345 "trtype": "TCP", 00:08:39.345 "adrfam": "IPv4", 00:08:39.345 "traddr": "10.0.0.2", 00:08:39.345 "trsvcid": "4420", 00:08:39.345 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:39.345 }, 00:08:39.345 "ctrlr_data": { 00:08:39.345 "cntlid": 1, 00:08:39.345 "vendor_id": "0x8086", 00:08:39.345 "model_number": "SPDK bdev Controller", 00:08:39.345 "serial_number": "SPDK0", 00:08:39.345 "firmware_revision": "24.09", 00:08:39.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.345 "oacs": { 00:08:39.345 "security": 0, 00:08:39.345 "format": 0, 00:08:39.345 "firmware": 0, 00:08:39.345 "ns_manage": 0 00:08:39.345 }, 00:08:39.345 "multi_ctrlr": true, 00:08:39.345 "ana_reporting": false 00:08:39.345 }, 00:08:39.345 "vs": { 00:08:39.345 "nvme_version": "1.3" 00:08:39.345 }, 00:08:39.345 "ns_data": { 00:08:39.345 "id": 1, 00:08:39.345 "can_share": true 00:08:39.345 } 00:08:39.345 } 00:08:39.345 ], 00:08:39.345 "mp_policy": "active_passive" 00:08:39.345 } 00:08:39.345 } 00:08:39.345 ] 
00:08:39.605 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:39.605 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65827 00:08:39.605 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:39.605 Running I/O for 10 seconds... 00:08:40.538 Latency(us) 00:08:40.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.538 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:40.538 =================================================================================================================== 00:08:40.538 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:40.538 00:08:41.474 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:41.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.474 Nvme0n1 : 2.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:08:41.474 =================================================================================================================== 00:08:41.474 Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:08:41.474 00:08:41.732 true 00:08:41.732 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:41.732 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:41.990 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:41.990 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:41.990 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65827 00:08:42.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.558 Nvme0n1 : 3.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:42.558 =================================================================================================================== 00:08:42.558 Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:42.558 00:08:43.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.492 Nvme0n1 : 4.00 7334.25 28.65 0.00 0.00 0.00 0.00 0.00 00:08:43.492 =================================================================================================================== 00:08:43.492 Total : 7334.25 28.65 0.00 0.00 0.00 0.00 0.00 00:08:43.492 00:08:44.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.869 Nvme0n1 : 5.00 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:08:44.869 =================================================================================================================== 00:08:44.869 Total : 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:08:44.869 00:08:45.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.819 Nvme0n1 : 6.00 7344.83 28.69 0.00 0.00 0.00 0.00 0.00 00:08:45.819 =================================================================================================================== 00:08:45.819 
Total : 7344.83 28.69 0.00 0.00 0.00 0.00 0.00 00:08:45.819 00:08:46.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.755 Nvme0n1 : 7.00 7275.29 28.42 0.00 0.00 0.00 0.00 0.00 00:08:46.755 =================================================================================================================== 00:08:46.755 Total : 7275.29 28.42 0.00 0.00 0.00 0.00 0.00 00:08:46.755 00:08:47.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.691 Nvme0n1 : 8.00 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:08:47.691 =================================================================================================================== 00:08:47.691 Total : 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:08:47.691 00:08:48.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.628 Nvme0n1 : 9.00 7210.78 28.17 0.00 0.00 0.00 0.00 0.00 00:08:48.628 =================================================================================================================== 00:08:48.628 Total : 7210.78 28.17 0.00 0.00 0.00 0.00 0.00 00:08:48.628 00:08:49.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.566 Nvme0n1 : 10.00 7200.90 28.13 0.00 0.00 0.00 0.00 0.00 00:08:49.566 =================================================================================================================== 00:08:49.566 Total : 7200.90 28.13 0.00 0.00 0.00 0.00 0.00 00:08:49.566 00:08:49.566 00:08:49.566 Latency(us) 00:08:49.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.566 Nvme0n1 : 10.02 7202.00 28.13 0.00 0.00 17767.40 14298.76 40751.48 00:08:49.566 =================================================================================================================== 00:08:49.566 Total : 7202.00 28.13 0.00 0.00 17767.40 14298.76 40751.48 00:08:49.566 0 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65809 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 65809 ']' 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 65809 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 65809 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 65809' 00:08:49.566 killing process with pid 65809 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 65809 00:08:49.566 Received shutdown signal, test time was about 10.000000 seconds 00:08:49.566 00:08:49.566 Latency(us) 00:08:49.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.566 
=================================================================================================================== 00:08:49.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:49.566 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 65809 00:08:49.826 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.084 08:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:50.344 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:50.344 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:50.602 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:50.602 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:50.602 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:50.860 [2024-06-10 08:04:12.695486] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:51.119 request: 00:08:51.119 { 00:08:51.119 "uuid": "2ed359f0-9df3-4748-8490-26c684d5da37", 00:08:51.119 "method": "bdev_lvol_get_lvstores", 
00:08:51.119 "req_id": 1 00:08:51.119 } 00:08:51.119 Got JSON-RPC error response 00:08:51.119 response: 00:08:51.119 { 00:08:51.119 "code": -19, 00:08:51.119 "message": "No such device" 00:08:51.119 } 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:51.119 08:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.378 aio_bdev 00:08:51.378 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9b0e74ad-7f1c-4613-8575-21ad63ddbacc 00:08:51.378 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=9b0e74ad-7f1c-4613-8575-21ad63ddbacc 00:08:51.378 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:08:51.378 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:08:51.378 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:08:51.378 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:08:51.378 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:51.637 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b0e74ad-7f1c-4613-8575-21ad63ddbacc -t 2000 00:08:51.896 [ 00:08:51.896 { 00:08:51.896 "name": "9b0e74ad-7f1c-4613-8575-21ad63ddbacc", 00:08:51.896 "aliases": [ 00:08:51.896 "lvs/lvol" 00:08:51.896 ], 00:08:51.896 "product_name": "Logical Volume", 00:08:51.896 "block_size": 4096, 00:08:51.896 "num_blocks": 38912, 00:08:51.896 "uuid": "9b0e74ad-7f1c-4613-8575-21ad63ddbacc", 00:08:51.896 "assigned_rate_limits": { 00:08:51.896 "rw_ios_per_sec": 0, 00:08:51.896 "rw_mbytes_per_sec": 0, 00:08:51.896 "r_mbytes_per_sec": 0, 00:08:51.896 "w_mbytes_per_sec": 0 00:08:51.896 }, 00:08:51.896 "claimed": false, 00:08:51.896 "zoned": false, 00:08:51.896 "supported_io_types": { 00:08:51.896 "read": true, 00:08:51.896 "write": true, 00:08:51.896 "unmap": true, 00:08:51.896 "write_zeroes": true, 00:08:51.896 "flush": false, 00:08:51.896 "reset": true, 00:08:51.896 "compare": false, 00:08:51.896 "compare_and_write": false, 00:08:51.896 "abort": false, 00:08:51.896 "nvme_admin": false, 00:08:51.896 "nvme_io": false 00:08:51.896 }, 00:08:51.896 "driver_specific": { 00:08:51.896 "lvol": { 00:08:51.896 "lvol_store_uuid": "2ed359f0-9df3-4748-8490-26c684d5da37", 00:08:51.896 "base_bdev": "aio_bdev", 00:08:51.896 "thin_provision": false, 00:08:51.896 "num_allocated_clusters": 38, 00:08:51.896 "snapshot": false, 00:08:51.896 "clone": false, 00:08:51.896 "esnap_clone": false 00:08:51.896 } 00:08:51.896 } 00:08:51.896 } 00:08:51.896 ] 00:08:51.896 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:08:51.896 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:51.897 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:52.155 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:52.155 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:52.155 08:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:52.414 08:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:52.414 08:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9b0e74ad-7f1c-4613-8575-21ad63ddbacc 00:08:52.671 08:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ed359f0-9df3-4748-8490-26c684d5da37 00:08:52.929 08:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:53.188 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.756 00:08:53.756 real 0m18.440s 00:08:53.756 user 0m17.304s 00:08:53.756 sys 0m2.601s 00:08:53.756 ************************************ 00:08:53.756 END TEST lvs_grow_clean 00:08:53.756 ************************************ 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.756 ************************************ 00:08:53.756 START TEST lvs_grow_dirty 00:08:53.756 ************************************ 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.756 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.014 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:54.015 08:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:54.273 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:08:54.273 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:08:54.273 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:54.839 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:54.839 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:54.839 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 lvol 150 00:08:54.839 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=03a86bfa-830c-4e10-baf2-8697a12ae92d 00:08:54.839 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:54.839 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:55.098 [2024-06-10 08:04:16.889582] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:55.098 [2024-06-10 08:04:16.889695] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:55.098 true 00:08:55.098 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:55.098 08:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:08:55.357 08:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:55.357 08:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:55.616 08:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 03a86bfa-830c-4e10-baf2-8697a12ae92d 00:08:55.874 08:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:56.132 [2024-06-10 08:04:17.918382] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.133 08:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:56.392 08:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66078 00:08:56.392 08:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:56.392 08:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.392 08:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66078 /var/tmp/bdevperf.sock 00:08:56.392 08:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 66078 ']' 00:08:56.392 08:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:56.392 08:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:56.392 08:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:56.392 08:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:56.392 08:04:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.392 [2024-06-10 08:04:18.235596] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:08:56.392 [2024-06-10 08:04:18.235982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66078 ] 00:08:56.651 [2024-06-10 08:04:18.368270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.651 [2024-06-10 08:04:18.488666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.910 [2024-06-10 08:04:18.549076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.479 08:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:57.479 08:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:08:57.479 08:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:57.738 Nvme0n1 00:08:57.738 08:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:57.997 [ 00:08:57.997 { 00:08:57.997 "name": "Nvme0n1", 00:08:57.997 "aliases": [ 00:08:57.997 "03a86bfa-830c-4e10-baf2-8697a12ae92d" 00:08:57.997 ], 00:08:57.997 "product_name": "NVMe disk", 00:08:57.997 "block_size": 4096, 00:08:57.997 "num_blocks": 38912, 00:08:57.997 "uuid": "03a86bfa-830c-4e10-baf2-8697a12ae92d", 00:08:57.997 "assigned_rate_limits": { 00:08:57.997 "rw_ios_per_sec": 0, 00:08:57.997 "rw_mbytes_per_sec": 0, 00:08:57.997 "r_mbytes_per_sec": 0, 00:08:57.997 "w_mbytes_per_sec": 0 00:08:57.997 }, 00:08:57.997 "claimed": false, 00:08:57.997 "zoned": false, 00:08:57.997 "supported_io_types": { 00:08:57.997 "read": true, 00:08:57.997 "write": true, 00:08:57.997 "unmap": true, 00:08:57.997 "write_zeroes": true, 00:08:57.997 "flush": true, 00:08:57.997 "reset": true, 00:08:57.997 "compare": true, 00:08:57.997 "compare_and_write": true, 00:08:57.997 "abort": true, 00:08:57.997 "nvme_admin": true, 00:08:57.997 "nvme_io": true 00:08:57.997 }, 00:08:57.997 "memory_domains": [ 00:08:57.997 { 00:08:57.997 "dma_device_id": "system", 00:08:57.997 "dma_device_type": 1 00:08:57.997 } 00:08:57.997 ], 00:08:57.997 "driver_specific": { 00:08:57.997 "nvme": [ 00:08:57.997 { 00:08:57.997 "trid": { 00:08:57.997 "trtype": "TCP", 00:08:57.997 "adrfam": "IPv4", 00:08:57.997 "traddr": "10.0.0.2", 00:08:57.997 "trsvcid": "4420", 00:08:57.997 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:57.997 }, 00:08:57.997 "ctrlr_data": { 00:08:57.997 "cntlid": 1, 00:08:57.997 "vendor_id": "0x8086", 00:08:57.997 "model_number": "SPDK bdev Controller", 00:08:57.997 "serial_number": "SPDK0", 00:08:57.997 "firmware_revision": "24.09", 00:08:57.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:57.997 "oacs": { 00:08:57.997 "security": 0, 00:08:57.997 "format": 0, 00:08:57.997 "firmware": 0, 00:08:57.997 "ns_manage": 0 00:08:57.997 }, 00:08:57.997 "multi_ctrlr": true, 00:08:57.997 "ana_reporting": false 00:08:57.997 }, 00:08:57.997 "vs": { 00:08:57.997 "nvme_version": "1.3" 00:08:57.997 }, 00:08:57.997 "ns_data": { 00:08:57.997 "id": 1, 00:08:57.997 "can_share": true 00:08:57.997 } 00:08:57.997 } 00:08:57.997 ], 00:08:57.997 "mp_policy": "active_passive" 00:08:57.997 } 00:08:57.997 } 00:08:57.997 ] 
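
As in the clean variant, the I/O itself comes from bdevperf rather than a kernel initiator. The invocation logged at target/nvmf_lvs_grow.sh@47 and the perform_tests trigger at @55 amount to the sketch below; the flag annotations reflect the commonly documented bdevperf options, not something the log itself spells out.

  # idle until told to run (-z), core 1 (-m 0x2), private RPC socket (-r);
  # 4 KiB I/O (-o 4096), random writes (-w randwrite), queue depth 128 (-q 128),
  # 10 seconds (-t 10), per-second status (-S 1), which produces the
  # "Nvme0n1 : N.00 ..." rows printed while "Running I/O for 10 seconds..."
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  # once Nvme0 is attached over that socket, the timed run is kicked off via RPC:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
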
00:08:57.997 08:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66102 00:08:57.997 08:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:57.997 08:04:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:58.256 Running I/O for 10 seconds... 00:08:59.192 Latency(us) 00:08:59.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.192 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:59.192 =================================================================================================================== 00:08:59.192 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:59.192 00:09:00.128 08:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:00.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.128 Nvme0n1 : 2.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:00.128 =================================================================================================================== 00:09:00.128 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:00.128 00:09:00.391 true 00:09:00.392 08:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:00.392 08:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:00.652 08:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:00.652 08:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:00.652 08:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66102 00:09:01.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.219 Nvme0n1 : 3.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:01.219 =================================================================================================================== 00:09:01.219 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:01.219 00:09:02.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.177 Nvme0n1 : 4.00 7143.75 27.91 0.00 0.00 0.00 0.00 0.00 00:09:02.177 =================================================================================================================== 00:09:02.177 Total : 7143.75 27.91 0.00 0.00 0.00 0.00 0.00 00:09:02.177 00:09:03.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.113 Nvme0n1 : 5.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:03.113 =================================================================================================================== 00:09:03.113 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:03.113 00:09:04.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.049 Nvme0n1 : 6.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:04.049 =================================================================================================================== 00:09:04.049 
Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:04.049 00:09:05.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.424 Nvme0n1 : 7.00 6912.71 27.00 0.00 0.00 0.00 0.00 0.00 00:09:05.424 =================================================================================================================== 00:09:05.424 Total : 6912.71 27.00 0.00 0.00 0.00 0.00 0.00 00:09:05.424 00:09:06.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.361 Nvme0n1 : 8.00 6890.00 26.91 0.00 0.00 0.00 0.00 0.00 00:09:06.361 =================================================================================================================== 00:09:06.361 Total : 6890.00 26.91 0.00 0.00 0.00 0.00 0.00 00:09:06.361 00:09:07.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.350 Nvme0n1 : 9.00 6844.11 26.73 0.00 0.00 0.00 0.00 0.00 00:09:07.350 =================================================================================================================== 00:09:07.350 Total : 6844.11 26.73 0.00 0.00 0.00 0.00 0.00 00:09:07.350 00:09:08.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.291 Nvme0n1 : 10.00 6832.80 26.69 0.00 0.00 0.00 0.00 0.00 00:09:08.291 =================================================================================================================== 00:09:08.291 Total : 6832.80 26.69 0.00 0.00 0.00 0.00 0.00 00:09:08.291 00:09:08.291 00:09:08.291 Latency(us) 00:09:08.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.291 Nvme0n1 : 10.02 6834.10 26.70 0.00 0.00 18723.27 6553.60 131548.63 00:09:08.291 =================================================================================================================== 00:09:08.291 Total : 6834.10 26.70 0.00 0.00 18723.27 6553.60 131548.63 00:09:08.291 0 00:09:08.291 08:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66078 00:09:08.291 08:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 66078 ']' 00:09:08.291 08:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 66078 00:09:08.291 08:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:09:08.291 08:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:08.291 08:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 66078 00:09:08.291 killing process with pid 66078 00:09:08.291 Received shutdown signal, test time was about 10.000000 seconds 00:09:08.291 00:09:08.291 Latency(us) 00:09:08.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.291 =================================================================================================================== 00:09:08.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:08.291 08:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:09:08.291 08:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:09:08.291 08:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66078' 00:09:08.291 08:04:29 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 66078 00:09:08.291 08:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 66078 00:09:08.551 08:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:08.810 08:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:09.069 08:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:09.069 08:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65721 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65721 00:09:09.329 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65721 Killed "${NVMF_APP[@]}" "$@" 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66240 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66240 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 66240 ']' 00:09:09.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:09.329 08:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.329 [2024-06-10 08:04:31.133057] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
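The dirty-shutdown sequence traced above removes the discovery listener, deletes the subsystem, records the free-cluster count (61 of 99), and then SIGKILLs the target so the lvstore is never cleanly unloaded; the freshly started nvmf_tgt whose EAL parameters follow has to recover it from dirty metadata. A minimal sketch of that sequence, assuming the PID and lvstore UUID from this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    # 61 of the 99 data clusters are still free at this point
    free_clusters=$($RPC bdev_lvol_get_lvstores \
        -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 | jq -r '.[0].free_clusters')
    kill -9 65721   # SIGKILL nvmf_tgt so the lvstore is left dirty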
00:09:09.329 [2024-06-10 08:04:31.133564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.588 [2024-06-10 08:04:31.277376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.588 [2024-06-10 08:04:31.381269] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.588 [2024-06-10 08:04:31.381340] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.588 [2024-06-10 08:04:31.381365] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.588 [2024-06-10 08:04:31.381373] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.588 [2024-06-10 08:04:31.381378] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.588 [2024-06-10 08:04:31.381422] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.588 [2024-06-10 08:04:31.438293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:10.533 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:10.533 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:09:10.533 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.533 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:10.533 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:10.533 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.533 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.533 [2024-06-10 08:04:32.399629] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:10.791 [2024-06-10 08:04:32.399893] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:10.792 [2024-06-10 08:04:32.400146] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:10.792 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:10.792 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 03a86bfa-830c-4e10-baf2-8697a12ae92d 00:09:10.792 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=03a86bfa-830c-4e10-baf2-8697a12ae92d 00:09:10.792 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:10.792 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:09:10.792 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:10.792 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:10.792 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:09:11.050 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 03a86bfa-830c-4e10-baf2-8697a12ae92d -t 2000 00:09:11.050 [ 00:09:11.050 { 00:09:11.050 "name": "03a86bfa-830c-4e10-baf2-8697a12ae92d", 00:09:11.050 "aliases": [ 00:09:11.050 "lvs/lvol" 00:09:11.050 ], 00:09:11.050 "product_name": "Logical Volume", 00:09:11.050 "block_size": 4096, 00:09:11.050 "num_blocks": 38912, 00:09:11.050 "uuid": "03a86bfa-830c-4e10-baf2-8697a12ae92d", 00:09:11.050 "assigned_rate_limits": { 00:09:11.050 "rw_ios_per_sec": 0, 00:09:11.050 "rw_mbytes_per_sec": 0, 00:09:11.050 "r_mbytes_per_sec": 0, 00:09:11.050 "w_mbytes_per_sec": 0 00:09:11.050 }, 00:09:11.050 "claimed": false, 00:09:11.050 "zoned": false, 00:09:11.050 "supported_io_types": { 00:09:11.050 "read": true, 00:09:11.050 "write": true, 00:09:11.050 "unmap": true, 00:09:11.050 "write_zeroes": true, 00:09:11.050 "flush": false, 00:09:11.050 "reset": true, 00:09:11.050 "compare": false, 00:09:11.050 "compare_and_write": false, 00:09:11.050 "abort": false, 00:09:11.050 "nvme_admin": false, 00:09:11.050 "nvme_io": false 00:09:11.050 }, 00:09:11.050 "driver_specific": { 00:09:11.050 "lvol": { 00:09:11.050 "lvol_store_uuid": "2c17e8fe-0f8d-476c-ba80-ed67e216a4f9", 00:09:11.050 "base_bdev": "aio_bdev", 00:09:11.050 "thin_provision": false, 00:09:11.050 "num_allocated_clusters": 38, 00:09:11.050 "snapshot": false, 00:09:11.050 "clone": false, 00:09:11.050 "esnap_clone": false 00:09:11.050 } 00:09:11.050 } 00:09:11.050 } 00:09:11.050 ] 00:09:11.050 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:09:11.050 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:11.050 08:04:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:11.618 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:11.618 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:11.618 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:11.618 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:11.618 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:11.879 [2024-06-10 08:04:33.620871] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # 
local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:11.879 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:12.139 request: 00:09:12.139 { 00:09:12.139 "uuid": "2c17e8fe-0f8d-476c-ba80-ed67e216a4f9", 00:09:12.139 "method": "bdev_lvol_get_lvstores", 00:09:12.139 "req_id": 1 00:09:12.139 } 00:09:12.139 Got JSON-RPC error response 00:09:12.139 response: 00:09:12.139 { 00:09:12.139 "code": -19, 00:09:12.139 "message": "No such device" 00:09:12.139 } 00:09:12.139 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:09:12.139 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:12.139 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:12.139 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:12.139 08:04:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:12.398 aio_bdev 00:09:12.398 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 03a86bfa-830c-4e10-baf2-8697a12ae92d 00:09:12.398 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=03a86bfa-830c-4e10-baf2-8697a12ae92d 00:09:12.398 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:12.398 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:09:12.398 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:12.398 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:12.398 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:12.657 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 03a86bfa-830c-4e10-baf2-8697a12ae92d -t 2000 00:09:12.916 [ 00:09:12.916 { 00:09:12.916 "name": "03a86bfa-830c-4e10-baf2-8697a12ae92d", 00:09:12.916 "aliases": [ 00:09:12.916 "lvs/lvol" 00:09:12.916 ], 00:09:12.916 
"product_name": "Logical Volume", 00:09:12.916 "block_size": 4096, 00:09:12.916 "num_blocks": 38912, 00:09:12.916 "uuid": "03a86bfa-830c-4e10-baf2-8697a12ae92d", 00:09:12.916 "assigned_rate_limits": { 00:09:12.916 "rw_ios_per_sec": 0, 00:09:12.916 "rw_mbytes_per_sec": 0, 00:09:12.916 "r_mbytes_per_sec": 0, 00:09:12.916 "w_mbytes_per_sec": 0 00:09:12.916 }, 00:09:12.916 "claimed": false, 00:09:12.916 "zoned": false, 00:09:12.916 "supported_io_types": { 00:09:12.916 "read": true, 00:09:12.916 "write": true, 00:09:12.916 "unmap": true, 00:09:12.916 "write_zeroes": true, 00:09:12.916 "flush": false, 00:09:12.916 "reset": true, 00:09:12.916 "compare": false, 00:09:12.916 "compare_and_write": false, 00:09:12.916 "abort": false, 00:09:12.916 "nvme_admin": false, 00:09:12.916 "nvme_io": false 00:09:12.916 }, 00:09:12.916 "driver_specific": { 00:09:12.916 "lvol": { 00:09:12.916 "lvol_store_uuid": "2c17e8fe-0f8d-476c-ba80-ed67e216a4f9", 00:09:12.916 "base_bdev": "aio_bdev", 00:09:12.916 "thin_provision": false, 00:09:12.916 "num_allocated_clusters": 38, 00:09:12.916 "snapshot": false, 00:09:12.916 "clone": false, 00:09:12.916 "esnap_clone": false 00:09:12.916 } 00:09:12.916 } 00:09:12.916 } 00:09:12.916 ] 00:09:12.916 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:09:12.916 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:12.916 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:13.175 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:13.175 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:13.175 08:04:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:13.434 08:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:13.434 08:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 03a86bfa-830c-4e10-baf2-8697a12ae92d 00:09:13.693 08:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c17e8fe-0f8d-476c-ba80-ed67e216a4f9 00:09:13.951 08:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.209 08:04:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:14.467 ************************************ 00:09:14.467 END TEST lvs_grow_dirty 00:09:14.467 ************************************ 00:09:14.467 00:09:14.467 real 0m20.651s 00:09:14.467 user 0m43.128s 00:09:14.467 sys 0m8.783s 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@807 -- # type=--id 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:14.468 nvmf_trace.0 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:14.468 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:14.725 rmmod nvme_tcp 00:09:14.725 rmmod nvme_fabrics 00:09:14.725 rmmod nvme_keyring 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66240 ']' 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66240 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 66240 ']' 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 66240 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 66240 00:09:14.725 killing process with pid 66240 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66240' 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 66240 00:09:14.725 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 66240 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:14.989 ************************************ 00:09:14.989 END TEST nvmf_lvs_grow 00:09:14.989 ************************************ 00:09:14.989 00:09:14.989 real 0m41.500s 00:09:14.989 user 1m6.557s 00:09:14.989 sys 0m12.093s 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:14.989 08:04:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.989 08:04:36 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:14.989 08:04:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:14.989 08:04:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:14.989 08:04:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:14.989 ************************************ 00:09:14.989 START TEST nvmf_bdev_io_wait 00:09:14.989 ************************************ 00:09:14.989 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:15.248 * Looking for test storage... 00:09:15.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.248 08:04:36 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.248 08:04:36 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:15.248 Cannot find device "nvmf_tgt_br" 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:15.248 Cannot find device "nvmf_tgt_br2" 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:15.248 Cannot find device "nvmf_tgt_br" 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:15.248 Cannot find device "nvmf_tgt_br2" 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:15.248 08:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:15.248 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:15.248 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:15.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:15.248 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:15.248 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:15.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:15.249 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:15.507 08:04:37 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:15.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:09:15.507 00:09:15.507 --- 10.0.0.2 ping statistics --- 00:09:15.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.507 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:15.507 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:15.507 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:09:15.507 00:09:15.507 --- 10.0.0.3 ping statistics --- 00:09:15.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.507 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:15.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:15.507 00:09:15.507 --- 10.0.0.1 ping statistics --- 00:09:15.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.507 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66559 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66559 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 66559 ']' 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:15.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:15.507 08:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.507 [2024-06-10 08:04:37.294031] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:09:15.507 [2024-06-10 08:04:37.294145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.766 [2024-06-10 08:04:37.434387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.766 [2024-06-10 08:04:37.550277] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.766 [2024-06-10 08:04:37.550619] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
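The veth topology built above is verified with three single-packet pings before the target application is started: the initiator side pings both target addresses across nvmf_br, and the target namespace pings the initiator address back. A minimal sketch of that check, assuming the namespace name and 10.0.0.x addressing used by this run:

    # host -> target addresses over the nvmf_br bridge
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    # target namespace -> host address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1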
00:09:15.766 [2024-06-10 08:04:37.550798] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.766 [2024-06-10 08:04:37.550932] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.766 [2024-06-10 08:04:37.550968] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.766 [2024-06-10 08:04:37.551199] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.766 [2024-06-10 08:04:37.551333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.766 [2024-06-10 08:04:37.551404] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.766 [2024-06-10 08:04:37.551404] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.700 [2024-06-10 08:04:38.304298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.700 [2024-06-10 08:04:38.316426] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.700 Malloc0 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.700 [2024-06-10 08:04:38.384509] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66595 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66597 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:16.700 { 00:09:16.700 "params": { 00:09:16.700 "name": "Nvme$subsystem", 00:09:16.700 "trtype": "$TEST_TRANSPORT", 00:09:16.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.700 "adrfam": "ipv4", 00:09:16.700 "trsvcid": "$NVMF_PORT", 00:09:16.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.700 "hdgst": ${hdgst:-false}, 00:09:16.700 "ddgst": ${ddgst:-false} 00:09:16.700 }, 00:09:16.700 "method": "bdev_nvme_attach_controller" 00:09:16.700 } 00:09:16.700 EOF 00:09:16.700 )") 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66599 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # cat 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:16.700 { 00:09:16.700 "params": { 00:09:16.700 "name": "Nvme$subsystem", 00:09:16.700 "trtype": "$TEST_TRANSPORT", 00:09:16.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.700 "adrfam": "ipv4", 00:09:16.700 "trsvcid": "$NVMF_PORT", 00:09:16.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.700 "hdgst": ${hdgst:-false}, 00:09:16.700 "ddgst": ${ddgst:-false} 00:09:16.700 }, 00:09:16.700 "method": "bdev_nvme_attach_controller" 00:09:16.700 } 00:09:16.700 EOF 00:09:16.700 )") 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66602 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:16.700 { 00:09:16.700 "params": { 00:09:16.700 "name": "Nvme$subsystem", 00:09:16.700 "trtype": "$TEST_TRANSPORT", 00:09:16.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.700 "adrfam": "ipv4", 00:09:16.700 "trsvcid": "$NVMF_PORT", 00:09:16.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.700 "hdgst": ${hdgst:-false}, 00:09:16.700 "ddgst": ${ddgst:-false} 00:09:16.700 }, 00:09:16.700 "method": "bdev_nvme_attach_controller" 00:09:16.700 } 00:09:16.700 EOF 00:09:16.700 )") 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:16.700 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:16.700 { 00:09:16.700 "params": { 00:09:16.701 "name": "Nvme$subsystem", 00:09:16.701 "trtype": "$TEST_TRANSPORT", 00:09:16.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.701 "adrfam": "ipv4", 00:09:16.701 "trsvcid": "$NVMF_PORT", 00:09:16.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.701 "hdgst": ${hdgst:-false}, 00:09:16.701 "ddgst": ${ddgst:-false} 00:09:16.701 }, 00:09:16.701 "method": "bdev_nvme_attach_controller" 00:09:16.701 } 00:09:16.701 EOF 00:09:16.701 )") 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
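Each of the four bdevperf jobs above is launched the same way: gen_nvmf_target_json emits the Nvme1 attach parameters as JSON (the heredoc and jq . traces above), and that output reaches bdevperf as the /dev/fd/63 seen in the command lines, i.e. via process substitution. A minimal sketch of the write-workload instance, assuming the helper from test/nvmf/common.sh has been sourced:

    # core mask 0x10, shm instance 1, queue depth 128, 4 KiB writes for 1 s,
    # 256 MB of memory; the generated JSON attaches Nvme1 over 10.0.0.2:4420
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)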
00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:16.701 "params": { 00:09:16.701 "name": "Nvme1", 00:09:16.701 "trtype": "tcp", 00:09:16.701 "traddr": "10.0.0.2", 00:09:16.701 "adrfam": "ipv4", 00:09:16.701 "trsvcid": "4420", 00:09:16.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.701 "hdgst": false, 00:09:16.701 "ddgst": false 00:09:16.701 }, 00:09:16.701 "method": "bdev_nvme_attach_controller" 00:09:16.701 }' 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:16.701 "params": { 00:09:16.701 "name": "Nvme1", 00:09:16.701 "trtype": "tcp", 00:09:16.701 "traddr": "10.0.0.2", 00:09:16.701 "adrfam": "ipv4", 00:09:16.701 "trsvcid": "4420", 00:09:16.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.701 "hdgst": false, 00:09:16.701 "ddgst": false 00:09:16.701 }, 00:09:16.701 "method": "bdev_nvme_attach_controller" 00:09:16.701 }' 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:16.701 "params": { 00:09:16.701 "name": "Nvme1", 00:09:16.701 "trtype": "tcp", 00:09:16.701 "traddr": "10.0.0.2", 00:09:16.701 "adrfam": "ipv4", 00:09:16.701 "trsvcid": "4420", 00:09:16.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.701 "hdgst": false, 00:09:16.701 "ddgst": false 00:09:16.701 }, 00:09:16.701 "method": "bdev_nvme_attach_controller" 00:09:16.701 }' 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:16.701 "params": { 00:09:16.701 "name": "Nvme1", 00:09:16.701 "trtype": "tcp", 00:09:16.701 "traddr": "10.0.0.2", 00:09:16.701 "adrfam": "ipv4", 00:09:16.701 "trsvcid": "4420", 00:09:16.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.701 "hdgst": false, 00:09:16.701 "ddgst": false 00:09:16.701 }, 00:09:16.701 "method": "bdev_nvme_attach_controller" 00:09:16.701 }' 00:09:16.701 [2024-06-10 08:04:38.440397] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:09:16.701 [2024-06-10 08:04:38.440936] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:16.701 08:04:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66595 00:09:16.701 [2024-06-10 08:04:38.474503] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:09:16.701 [2024-06-10 08:04:38.474656] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:16.701 [2024-06-10 08:04:38.486677] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:09:16.701 [2024-06-10 08:04:38.487574] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:16.701 [2024-06-10 08:04:38.509815] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:09:16.701 [2024-06-10 08:04:38.510231] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:16.959 [2024-06-10 08:04:38.641093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.959 [2024-06-10 08:04:38.715048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.959 [2024-06-10 08:04:38.753820] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:09:16.959 [2024-06-10 08:04:38.794157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.959 [2024-06-10 08:04:38.802590] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:09:16.959 [2024-06-10 08:04:38.824295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.218 [2024-06-10 08:04:38.869829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.218 [2024-06-10 08:04:38.890401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:09:17.218 [2024-06-10 08:04:38.892339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.218 [2024-06-10 08:04:38.951568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.218 Running I/O for 1 seconds... 00:09:17.218 Running I/O for 1 seconds... 00:09:17.218 [2024-06-10 08:04:39.013374] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:09:17.218 Running I/O for 1 seconds... 00:09:17.218 [2024-06-10 08:04:39.073755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.476 Running I/O for 1 seconds... 
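The command lines at target/bdev_io_wait.sh@31 and @33 above start the flush (core mask 0x40) and unmap (core mask 0x80) bdevperf instances; the --json /dev/fd/63 argument is how a <(gen_nvmf_target_json) process substitution appears once expanded, so each instance reads its generated controller config over an anonymous pipe. A hypothetical standalone re-run of the flush instance, with the binary path and flags copied from the trace and gen_nvmf_target_json assumed to be provided by test/nvmf/common.sh:

# Sketch only: one of the four short bdevperf runs whose per-workload result
# tables follow below (flush shown here; unmap, write and read are launched
# the same way on their own core masks).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x40 -i 3 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w flush -t 1 -s 256

The 1-second duration (-t 1), 128-deep queue (-q 128) and 4096-byte I/O size (-o 4096) match the "depth: 128, IO size: 4096" header printed in each result table.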
00:09:18.411 00:09:18.411 Latency(us) 00:09:18.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.411 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:18.411 Nvme1n1 : 1.00 170918.20 667.65 0.00 0.00 746.16 338.85 1400.09 00:09:18.411 =================================================================================================================== 00:09:18.411 Total : 170918.20 667.65 0.00 0.00 746.16 338.85 1400.09 00:09:18.411 00:09:18.411 Latency(us) 00:09:18.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.411 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:18.411 Nvme1n1 : 1.02 6509.29 25.43 0.00 0.00 19495.45 8936.73 34793.66 00:09:18.411 =================================================================================================================== 00:09:18.411 Total : 6509.29 25.43 0.00 0.00 19495.45 8936.73 34793.66 00:09:18.411 00:09:18.411 Latency(us) 00:09:18.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.411 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:18.411 Nvme1n1 : 1.01 6196.10 24.20 0.00 0.00 20579.48 6732.33 41466.41 00:09:18.411 =================================================================================================================== 00:09:18.411 Total : 6196.10 24.20 0.00 0.00 20579.48 6732.33 41466.41 00:09:18.411 00:09:18.411 Latency(us) 00:09:18.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.411 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:18.411 Nvme1n1 : 1.01 8404.38 32.83 0.00 0.00 15149.34 9949.56 25737.77 00:09:18.411 =================================================================================================================== 00:09:18.411 Total : 8404.38 32.83 0.00 0.00 15149.34 9949.56 25737.77 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66597 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66599 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66602 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.669 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.927 rmmod nvme_tcp 00:09:18.927 rmmod nvme_fabrics 00:09:18.927 rmmod nvme_keyring 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66559 ']' 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66559 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 66559 ']' 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 66559 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 66559 00:09:18.927 killing process with pid 66559 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66559' 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 66559 00:09:18.927 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 66559 00:09:19.186 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.186 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.186 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.186 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.186 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.186 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.187 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.187 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.187 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:19.187 00:09:19.187 real 0m4.093s 00:09:19.187 user 0m18.097s 00:09:19.187 sys 0m2.280s 00:09:19.187 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:19.187 08:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.187 ************************************ 00:09:19.187 END TEST nvmf_bdev_io_wait 00:09:19.187 ************************************ 00:09:19.187 08:04:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:19.187 08:04:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:19.187 08:04:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:19.187 08:04:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:19.187 ************************************ 00:09:19.187 START TEST nvmf_queue_depth 00:09:19.187 ************************************ 00:09:19.187 08:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:19.187 * Looking for test storage... 00:09:19.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:19.187 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:19.446 Cannot find device "nvmf_tgt_br" 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.446 Cannot find device "nvmf_tgt_br2" 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:19.446 Cannot find device "nvmf_tgt_br" 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:19.446 Cannot find device "nvmf_tgt_br2" 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:19.446 08:04:41 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:19.446 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:09:19.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:09:19.705 00:09:19.705 --- 10.0.0.2 ping statistics --- 00:09:19.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.705 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:19.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:19.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:09:19.705 00:09:19.705 --- 10.0.0.3 ping statistics --- 00:09:19.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.705 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:19.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:19.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:09:19.705 00:09:19.705 --- 10.0.0.1 ping statistics --- 00:09:19.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.705 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66831 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66831 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 66831 ']' 00:09:19.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
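The three ping checks above confirm the topology that nvmf_veth_init assembled at nvmf/common.sh@166-202: a network namespace for the target, veth pairs joined by a bridge, and an iptables rule admitting the NVMe/TCP port. A condensed sketch of that setup with the commands copied from the trace (run as root; the second target interface nvmf_tgt_if2/10.0.0.3 and the teardown half at common.sh@154-163 are omitted for brevity):

# Sketch of the veth/bridge topology built above (host side <-> nvmf_tgt_ns_spdk).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # host -> target namespace, as in the trace above

The nvmf_tgt application is then launched through ip netns exec nvmf_tgt_ns_spdk (nvmf/common.sh@480 below), so it listens on 10.0.0.2 inside the namespace while the initiator stays on the host side of the bridge.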
00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:19.705 08:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.705 [2024-06-10 08:04:41.427462] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:09:19.705 [2024-06-10 08:04:41.427577] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.705 [2024-06-10 08:04:41.565540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.964 [2024-06-10 08:04:41.677076] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.964 [2024-06-10 08:04:41.677127] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.964 [2024-06-10 08:04:41.677138] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.964 [2024-06-10 08:04:41.677146] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.964 [2024-06-10 08:04:41.677153] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.964 [2024-06-10 08:04:41.677184] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.964 [2024-06-10 08:04:41.731570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.898 [2024-06-10 08:04:42.485443] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.898 Malloc0 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.898 [2024-06-10 08:04:42.546045] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66863 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66863 /var/tmp/bdevperf.sock 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 66863 ']' 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:20.898 08:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.898 [2024-06-10 08:04:42.605625] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:09:20.898 [2024-06-10 08:04:42.606190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66863 ] 00:09:20.898 [2024-06-10 08:04:42.744762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.156 [2024-06-10 08:04:42.862427] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.156 [2024-06-10 08:04:42.929712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:22.092 08:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:22.092 08:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:09:22.092 08:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:22.092 08:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.092 08:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.092 NVMe0n1 00:09:22.092 08:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.092 08:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:22.092 Running I/O for 10 seconds... 00:09:32.099 00:09:32.099 Latency(us) 00:09:32.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.099 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:32.099 Verification LBA range: start 0x0 length 0x4000 00:09:32.099 NVMe0n1 : 10.07 7961.42 31.10 0.00 0.00 128040.98 13285.93 92465.34 00:09:32.099 =================================================================================================================== 00:09:32.099 Total : 7961.42 31.10 0.00 0.00 128040.98 13285.93 92465.34 00:09:32.099 0 00:09:32.099 08:04:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66863 00:09:32.099 08:04:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 66863 ']' 00:09:32.099 08:04:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 66863 00:09:32.099 08:04:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:09:32.099 08:04:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:32.099 08:04:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 66863 00:09:32.099 killing process with pid 66863 00:09:32.099 Received shutdown signal, test time was about 10.000000 seconds 00:09:32.099 00:09:32.099 Latency(us) 00:09:32.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.099 =================================================================================================================== 00:09:32.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:32.099 08:04:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:32.099 08:04:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:32.099 08:04:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66863' 00:09:32.099 
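The queue-depth run above is wired up in two halves: target/queue_depth.sh@23-27 configures the already-running nvmf_tgt over RPC, and @29-35 starts a bdevperf instance on its own RPC socket, attaches it to the exported subsystem and kicks off the 10-second verify workload via bdevperf.py. A sketch of the same sequence with every argument copied from the trace; rpc_cmd in the trace is assumed to wrap scripts/rpc.py, and the harness's waitforlisten steps are reduced to a comment here:

spdk=/home/vagrant/spdk_repo/spdk

# Target side (default RPC socket /var/tmp/spdk.sock), per queue_depth.sh@23-27.
$spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side, per queue_depth.sh@29-35; wait for /var/tmp/bdevperf.sock
# to appear before issuing the attach RPC.
$spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the result table above, the MiB/s column follows directly from IOPS times the 4096-byte I/O size: 7961.42 x 4096 B / 2^20 is approximately 31.10 MiB/s.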
08:04:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 66863 00:09:32.099 08:04:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 66863 00:09:32.358 08:04:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:32.358 08:04:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:32.358 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.358 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:32.358 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.358 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:32.358 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.358 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.358 rmmod nvme_tcp 00:09:32.616 rmmod nvme_fabrics 00:09:32.616 rmmod nvme_keyring 00:09:32.616 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.616 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:32.616 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:32.616 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66831 ']' 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66831 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 66831 ']' 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 66831 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 66831 00:09:32.617 killing process with pid 66831 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66831' 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 66831 00:09:32.617 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 66831 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:32.876 00:09:32.876 real 0m13.678s 00:09:32.876 
user 0m23.752s 00:09:32.876 sys 0m2.217s 00:09:32.876 ************************************ 00:09:32.876 END TEST nvmf_queue_depth 00:09:32.876 ************************************ 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:32.876 08:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:32.876 08:04:54 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:32.876 08:04:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:32.876 08:04:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:32.876 08:04:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:32.876 ************************************ 00:09:32.876 START TEST nvmf_target_multipath 00:09:32.876 ************************************ 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:32.876 * Looking for test storage... 00:09:32.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.876 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.135 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:33.136 08:04:54 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:33.136 Cannot find device "nvmf_tgt_br" 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.136 Cannot find device "nvmf_tgt_br2" 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:33.136 Cannot find device "nvmf_tgt_br" 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:33.136 Cannot find device "nvmf_tgt_br2" 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:33.136 08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:33.136 
08:04:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:33.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:09:33.395 00:09:33.395 --- 10.0.0.2 ping statistics --- 00:09:33.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.395 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:33.395 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:33.395 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:09:33.395 00:09:33.395 --- 10.0.0.3 ping statistics --- 00:09:33.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.395 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:33.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:09:33.395 00:09:33.395 --- 10.0.0.1 ping statistics --- 00:09:33.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.395 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67183 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67183 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@830 -- # '[' -z 67183 ']' 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:33.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:33.395 08:04:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:33.395 [2024-06-10 08:04:55.181998] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:09:33.395 [2024-06-10 08:04:55.182145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.654 [2024-06-10 08:04:55.327988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.654 [2024-06-10 08:04:55.455407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.654 [2024-06-10 08:04:55.455485] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.654 [2024-06-10 08:04:55.455499] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.654 [2024-06-10 08:04:55.455510] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.654 [2024-06-10 08:04:55.455519] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.654 [2024-06-10 08:04:55.455685] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.654 [2024-06-10 08:04:55.455866] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.654 [2024-06-10 08:04:55.456634] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.654 [2024-06-10 08:04:55.456668] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.654 [2024-06-10 08:04:55.516466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:34.589 08:04:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:34.589 08:04:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@863 -- # return 0 00:09:34.589 08:04:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.589 08:04:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:34.589 08:04:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:34.590 08:04:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.590 08:04:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:34.590 [2024-06-10 08:04:56.449892] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.848 08:04:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:35.106 Malloc0 00:09:35.106 08:04:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:35.365 08:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.624 08:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.882 [2024-06-10 08:04:57.524081] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.882 08:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:36.141 [2024-06-10 08:04:57.752286] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:36.141 08:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid=0b063e5e-64f6-4b4f-b15f-bd51b74609ab -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:36.141 08:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid=0b063e5e-64f6-4b4f-b15f-bd51b74609ab -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:36.400 08:04:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:36.400 08:04:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1197 -- # local i=0 00:09:36.400 08:04:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.400 08:04:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:36.400 08:04:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # sleep 2 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # return 0 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:38.304 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67277 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:38.305 08:05:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:38.305 [global] 00:09:38.305 thread=1 00:09:38.305 invalidate=1 00:09:38.305 rw=randrw 00:09:38.305 time_based=1 00:09:38.305 runtime=6 00:09:38.305 ioengine=libaio 00:09:38.305 direct=1 00:09:38.305 bs=4096 00:09:38.305 iodepth=128 00:09:38.305 norandommap=0 00:09:38.305 numjobs=1 00:09:38.305 00:09:38.305 verify_dump=1 00:09:38.305 verify_backlog=512 00:09:38.305 verify_state_save=0 00:09:38.305 do_verify=1 00:09:38.305 verify=crc32c-intel 00:09:38.305 [job0] 00:09:38.305 filename=/dev/nvme0n1 00:09:38.305 Could not set queue depth (nvme0n1) 00:09:38.563 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:38.563 fio-3.35 00:09:38.563 Starting 1 thread 00:09:39.500 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:39.500 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:39.759 08:05:01 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:39.759 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:40.017 08:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:40.276 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:40.276 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:40.276 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:40.277 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:40.277 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:40.277 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:40.277 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:40.277 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:40.277 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:40.277 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:40.277 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:40.277 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:40.277 08:05:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67277 00:09:45.577 00:09:45.577 job0: (groupid=0, jobs=1): err= 0: pid=67299: Mon Jun 10 08:05:06 2024 00:09:45.577 read: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(241MiB/6002msec) 00:09:45.577 slat (usec): min=6, max=8046, avg=57.22, stdev=218.78 00:09:45.577 clat (usec): min=1139, max=17192, avg=8452.32, stdev=1434.51 00:09:45.577 lat (usec): min=1611, max=17202, avg=8509.54, stdev=1437.21 00:09:45.577 clat percentiles (usec): 00:09:45.577 | 1.00th=[ 4490], 5.00th=[ 6652], 10.00th=[ 7242], 20.00th=[ 7701], 00:09:45.577 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:09:45.577 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[11731], 00:09:45.577 | 99.00th=[13173], 99.50th=[13435], 99.90th=[15664], 99.95th=[16450], 00:09:45.577 | 99.99th=[17171] 00:09:45.577 bw ( KiB/s): min= 5424, max=26848, per=51.29%, avg=21120.00, stdev=7255.52, samples=11 00:09:45.577 iops : min= 1356, max= 6712, avg=5280.00, stdev=1813.88, samples=11 00:09:45.577 write: IOPS=6245, BW=24.4MiB/s (25.6MB/s)(126MiB/5167msec); 0 zone resets 00:09:45.577 slat (usec): min=14, max=1612, avg=66.12, stdev=156.96 00:09:45.577 clat (usec): min=2166, max=16672, avg=7406.12, stdev=1288.83 00:09:45.577 lat (usec): min=2203, max=16697, avg=7472.24, stdev=1293.03 00:09:45.577 clat percentiles (usec): 00:09:45.577 | 1.00th=[ 3458], 5.00th=[ 4490], 10.00th=[ 6063], 20.00th=[ 6849], 00:09:45.577 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7701], 00:09:45.577 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[ 8586], 95.00th=[ 8979], 00:09:45.577 | 99.00th=[11207], 99.50th=[11731], 99.90th=[13042], 99.95th=[13698], 00:09:45.577 | 99.99th=[14746] 00:09:45.577 bw ( KiB/s): min= 5736, max=26296, per=85.00%, avg=21233.45, stdev=7103.84, samples=11 00:09:45.577 iops : min= 1434, max= 6574, avg=5308.36, stdev=1775.96, samples=11 00:09:45.577 lat (msec) : 2=0.03%, 4=1.29%, 10=92.44%, 20=6.24% 00:09:45.577 cpu : usr=5.27%, sys=22.71%, ctx=5565, majf=0, minf=114 00:09:45.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:45.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.577 issued rwts: total=61788,32268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.577 00:09:45.577 Run status group 0 (all jobs): 00:09:45.577 READ: bw=40.2MiB/s (42.2MB/s), 40.2MiB/s-40.2MiB/s (42.2MB/s-42.2MB/s), io=241MiB (253MB), run=6002-6002msec 00:09:45.577 WRITE: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=126MiB (132MB), run=5167-5167msec 00:09:45.577 00:09:45.577 Disk stats (read/write): 00:09:45.577 nvme0n1: ios=61006/31618, merge=0/0, ticks=494076/219475, in_queue=713551, util=98.68% 00:09:45.577 08:05:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:45.577 08:05:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67378 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:45.577 08:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:45.577 [global] 00:09:45.577 thread=1 00:09:45.577 invalidate=1 00:09:45.577 rw=randrw 00:09:45.577 time_based=1 00:09:45.577 runtime=6 00:09:45.577 ioengine=libaio 00:09:45.577 direct=1 00:09:45.577 bs=4096 00:09:45.577 iodepth=128 00:09:45.577 norandommap=0 00:09:45.577 numjobs=1 00:09:45.577 00:09:45.577 verify_dump=1 00:09:45.577 verify_backlog=512 00:09:45.577 verify_state_save=0 00:09:45.577 do_verify=1 00:09:45.577 verify=crc32c-intel 00:09:45.577 [job0] 00:09:45.577 filename=/dev/nvme0n1 00:09:45.577 Could not set queue depth (nvme0n1) 00:09:45.577 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.577 fio-3.35 00:09:45.577 Starting 1 thread 00:09:46.188 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:46.756 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:47.015 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:47.274 08:05:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:47.274 08:05:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67378 00:09:51.489 00:09:51.489 job0: (groupid=0, jobs=1): err= 0: pid=67399: Mon Jun 10 08:05:13 2024 00:09:51.489 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(273MiB/6006msec) 00:09:51.489 slat (usec): min=5, max=6904, avg=43.05, stdev=184.96 00:09:51.489 clat (usec): min=1261, max=17093, avg=7534.61, stdev=1930.01 00:09:51.489 lat (usec): min=1271, max=17104, avg=7577.66, stdev=1945.54 00:09:51.489 clat percentiles (usec): 00:09:51.489 | 1.00th=[ 3392], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5932], 00:09:51.489 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 7963], 00:09:51.489 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[11207], 00:09:51.489 | 99.00th=[13173], 99.50th=[13829], 99.90th=[15139], 99.95th=[15533], 00:09:51.489 | 99.99th=[16450] 00:09:51.489 bw ( KiB/s): min=12240, max=39264, per=52.40%, avg=24376.09, stdev=8197.70, samples=11 00:09:51.489 iops : min= 3060, max= 9816, avg=6094.00, stdev=2049.42, samples=11 00:09:51.489 write: IOPS=6781, BW=26.5MiB/s (27.8MB/s)(144MiB/5423msec); 0 zone resets 00:09:51.489 slat (usec): min=16, max=2768, avg=54.93, stdev=129.58 00:09:51.489 clat (usec): min=1741, max=16896, avg=6422.92, stdev=1800.61 00:09:51.489 lat (usec): min=1767, max=16928, avg=6477.85, stdev=1813.55 00:09:51.489 clat percentiles (usec): 00:09:51.489 | 1.00th=[ 2802], 5.00th=[ 3392], 10.00th=[ 3818], 20.00th=[ 4490], 00:09:51.489 | 30.00th=[ 5276], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 7242], 00:09:51.489 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8225], 95.00th=[ 8717], 00:09:51.489 | 99.00th=[10814], 99.50th=[11731], 99.90th=[14353], 99.95th=[14877], 00:09:51.489 | 99.99th=[16057] 00:09:51.489 bw ( KiB/s): min=12920, max=38608, per=89.98%, avg=24409.73, stdev=7980.08, samples=11 00:09:51.489 iops : min= 3230, max= 9652, avg=6102.36, stdev=1995.00, samples=11 00:09:51.489 lat (msec) : 2=0.08%, 4=6.25%, 10=88.14%, 20=5.53% 00:09:51.489 cpu : usr=6.18%, sys=25.13%, ctx=5984, majf=0, minf=96 00:09:51.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:51.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.489 issued rwts: total=69844,36776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.489 00:09:51.489 Run status group 0 (all jobs): 00:09:51.489 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=273MiB (286MB), run=6006-6006msec 00:09:51.489 WRITE: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=144MiB (151MB), run=5423-5423msec 00:09:51.489 00:09:51.489 Disk stats (read/write): 00:09:51.489 nvme0n1: ios=69155/35873, merge=0/0, ticks=494743/211643, in_queue=706386, util=98.66% 00:09:51.489 08:05:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:51.749 08:05:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.749 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1218 -- # local i=0 00:09:51.749 
08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:51.749 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.749 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:51.749 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.749 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1230 -- # return 0 00:09:51.749 08:05:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.007 08:05:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:52.007 08:05:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:52.007 08:05:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:52.007 08:05:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:52.007 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.007 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:52.007 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.007 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:52.007 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.007 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.007 rmmod nvme_tcp 00:09:52.265 rmmod nvme_fabrics 00:09:52.265 rmmod nvme_keyring 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67183 ']' 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67183 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@949 -- # '[' -z 67183 ']' 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # kill -0 67183 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # uname 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 67183 00:09:52.265 killing process with pid 67183 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # echo 'killing process with pid 67183' 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@968 -- # kill 67183 00:09:52.265 08:05:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@973 -- # wait 67183 00:09:52.524 08:05:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:09:52.524 08:05:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.524 08:05:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:52.524 08:05:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.524 08:05:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.524 08:05:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.524 08:05:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.524 08:05:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.524 08:05:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:52.524 ************************************ 00:09:52.524 END TEST nvmf_target_multipath 00:09:52.524 ************************************ 00:09:52.524 00:09:52.524 real 0m19.610s 00:09:52.525 user 1m13.227s 00:09:52.525 sys 0m10.301s 00:09:52.525 08:05:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:52.525 08:05:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:52.525 08:05:14 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:52.525 08:05:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:52.525 08:05:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:52.525 08:05:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.525 ************************************ 00:09:52.525 START TEST nvmf_zcopy 00:09:52.525 ************************************ 00:09:52.525 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:52.525 * Looking for test storage... 
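The nvmf_zcopy trace below begins with the same nvmftestinit bring-up used by the multipath test above (nvmf_veth_init, nvmf/common.sh@432 in this trace): a network namespace for the target, two veth pairs bridged back to the initiator, and an iptables accept rule for port 4420. Condensed into a standalone sketch, with interface, namespace and address names taken from this log; root privileges, iproute2 and iptables are assumed:

# Sketch of the topology nvmf_veth_init builds in this trace; not the SPDK script itself.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # reachability checks seen in the log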
00:09:52.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.525 08:05:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:52.525 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.784 08:05:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:52.785 Cannot find device "nvmf_tgt_br" 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.785 Cannot find device "nvmf_tgt_br2" 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:52.785 Cannot find device "nvmf_tgt_br" 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:52.785 Cannot find device "nvmf_tgt_br2" 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:52.785 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:53.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:09:53.044 00:09:53.044 --- 10.0.0.2 ping statistics --- 00:09:53.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.044 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:53.044 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:53.044 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:09:53.044 00:09:53.044 --- 10.0.0.3 ping statistics --- 00:09:53.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.044 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:53.044 00:09:53.044 --- 10.0.0.1 ping statistics --- 00:09:53.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.044 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67645 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67645 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 67645 ']' 00:09:53.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:53.044 08:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.044 [2024-06-10 08:05:14.799225] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:09:53.044 [2024-06-10 08:05:14.799545] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.303 [2024-06-10 08:05:14.944804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.303 [2024-06-10 08:05:15.037742] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.303 [2024-06-10 08:05:15.038102] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:53.303 [2024-06-10 08:05:15.038122] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.303 [2024-06-10 08:05:15.038148] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.303 [2024-06-10 08:05:15.038155] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.303 [2024-06-10 08:05:15.038197] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.304 [2024-06-10 08:05:15.096514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.240 [2024-06-10 08:05:15.789804] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.240 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.241 [2024-06-10 08:05:15.805966] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
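At this point everything the zcopy target needs is in place except the namespace; the bare "malloc0" line that follows is bdev_malloc_create echoing the new bdev's name, and zcopy.sh@30 then attaches it as NSID 1. The rpc_cmd calls traced here reduce to the following direct rpc.py invocations, a sketch only: flags are copied from the trace, and the default /var/tmp/spdk.sock RPC socket is assumed.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport; -c 0 sets the in-capsule data size and --zcopy turns on zero-copy (flags as traced at zcopy.sh@22)
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem: allow any host (-a), serial SPDK00000000000001, at most 10 namespaces (-m 10)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Data and discovery listeners on the first target address
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1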
00:09:54.241 malloc0 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:54.241 { 00:09:54.241 "params": { 00:09:54.241 "name": "Nvme$subsystem", 00:09:54.241 "trtype": "$TEST_TRANSPORT", 00:09:54.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.241 "adrfam": "ipv4", 00:09:54.241 "trsvcid": "$NVMF_PORT", 00:09:54.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.241 "hdgst": ${hdgst:-false}, 00:09:54.241 "ddgst": ${ddgst:-false} 00:09:54.241 }, 00:09:54.241 "method": "bdev_nvme_attach_controller" 00:09:54.241 } 00:09:54.241 EOF 00:09:54.241 )") 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:54.241 08:05:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:54.241 "params": { 00:09:54.241 "name": "Nvme1", 00:09:54.241 "trtype": "tcp", 00:09:54.241 "traddr": "10.0.0.2", 00:09:54.241 "adrfam": "ipv4", 00:09:54.241 "trsvcid": "4420", 00:09:54.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.241 "hdgst": false, 00:09:54.241 "ddgst": false 00:09:54.241 }, 00:09:54.241 "method": "bdev_nvme_attach_controller" 00:09:54.241 }' 00:09:54.241 [2024-06-10 08:05:15.904075] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:09:54.241 [2024-06-10 08:05:15.904311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67678 ] 00:09:54.241 [2024-06-10 08:05:16.048088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.500 [2024-06-10 08:05:16.154673] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.500 [2024-06-10 08:05:16.236467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:54.500 Running I/O for 10 seconds... 
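While the 10-second run is in flight, note how bdevperf gets its target: gen_nvmf_target_json prints the bdev_nvme_attach_controller parameters shown above and feeds them to bdevperf over /dev/fd/62. A roughly equivalent standalone invocation is sketched below; the attach parameters are copied from that printf, the surrounding JSON wrapper is assumed to match what gen_nvmf_target_json assembles, and the file path is hypothetical.

cat > /tmp/zcopy_target.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# -t 10: seconds to run, -q 128: queue depth, -w verify: write/read-back verification workload, -o 8192: 8 KiB I/O
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/zcopy_target.json -t 10 -q 128 -w verify -o 8192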
00:10:06.713 00:10:06.713 Latency(us) 00:10:06.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.713 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:06.713 Verification LBA range: start 0x0 length 0x1000 00:10:06.713 Nvme1n1 : 10.01 6376.34 49.82 0.00 0.00 20009.89 1206.46 33125.47 00:10:06.713 =================================================================================================================== 00:10:06.713 Total : 6376.34 49.82 0.00 0.00 20009.89 1206.46 33125.47 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67800 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:06.713 { 00:10:06.713 "params": { 00:10:06.713 "name": "Nvme$subsystem", 00:10:06.713 "trtype": "$TEST_TRANSPORT", 00:10:06.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.713 "adrfam": "ipv4", 00:10:06.713 "trsvcid": "$NVMF_PORT", 00:10:06.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.713 "hdgst": ${hdgst:-false}, 00:10:06.713 "ddgst": ${ddgst:-false} 00:10:06.713 }, 00:10:06.713 "method": "bdev_nvme_attach_controller" 00:10:06.713 } 00:10:06.713 EOF 00:10:06.713 )") 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:06.713 [2024-06-10 08:05:26.623878] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.713 [2024-06-10 08:05:26.623950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
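The second bdevperf pass being set up here runs a 50/50 random read/write workload for 5 seconds against the same JSON-described controller, and the *ERROR* lines about NSID 1 that start appearing are the target rejecting an nvmf_subsystem_add_ns for a namespace ID that is still attached; judging from the trace alone, re-adding the namespace under live zero-copy I/O appears to be what this phase exercises. The invocation and the kind of call that produces the error are sketched below (reusing the hypothetical /tmp/zcopy_target.json from above, whereas the test itself pipes the JSON over /dev/fd/63; the script's actual loop is not visible in this trace, so only a single call is shown).

# Second pass: 5 s of 50/50 random read/write, 8 KiB I/O, queue depth 128, run in the background
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/zcopy_target.json -t 5 -q 128 -w randrw -M 50 -o 8192 &

# Re-adding NSID 1 while malloc0 is still attached fails with "Requested NSID 1 already in use"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1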
00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:06.713 08:05:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:06.713 "params": { 00:10:06.713 "name": "Nvme1", 00:10:06.713 "trtype": "tcp", 00:10:06.713 "traddr": "10.0.0.2", 00:10:06.713 "adrfam": "ipv4", 00:10:06.713 "trsvcid": "4420", 00:10:06.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.713 "hdgst": false, 00:10:06.713 "ddgst": false 00:10:06.713 }, 00:10:06.713 "method": "bdev_nvme_attach_controller" 00:10:06.713 }' 00:10:06.713 [2024-06-10 08:05:26.639783] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.713 [2024-06-10 08:05:26.639978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.713 [2024-06-10 08:05:26.651784] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.713 [2024-06-10 08:05:26.651981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.659782] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.659985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.671809] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.671852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.682306] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:10:06.714 [2024-06-10 08:05:26.682447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67800 ] 00:10:06.714 [2024-06-10 08:05:26.683804] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.683850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.695833] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.696010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.707834] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.708019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.719851] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.720016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.731837] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.732007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.743841] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.744079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.755853] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:10:06.714 [2024-06-10 08:05:26.756046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.767857] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.768024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.779861] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.779886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.791863] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.791888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.803867] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.803892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.815874] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.815899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.825584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.714 [2024-06-10 08:05:26.827905] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.827931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.839920] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.839954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.851906] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.851935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.863907] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.863932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.875918] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.875946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.887917] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.887944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.899924] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.899949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.911932] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.911968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.917179] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.714 [2024-06-10 08:05:26.923927] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.923964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.935947] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.935975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.947961] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.947993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.959944] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.959971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.971946] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.971973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.983958] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.983986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.995961] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:26.995987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:26.996217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.714 [2024-06-10 08:05:27.007968] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.007995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.019982] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.020041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.031970] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.031996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.044026] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.044085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.056013] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.056084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.068079] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.068111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.080090] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.080122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.092084] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.092114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.104080] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.104113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 Running I/O for 5 seconds... 00:10:06.714 [2024-06-10 08:05:27.116099] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.116134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.134652] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.134688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.150219] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.150266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.166922] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.166958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.183212] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.183246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.201996] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.202030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.215628] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.215661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.231468] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.714 [2024-06-10 08:05:27.231501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.714 [2024-06-10 08:05:27.248239] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.248274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.266889] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.266921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.280765] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.280836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.296731] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.296763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.314606] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 
[2024-06-10 08:05:27.314639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.329547] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.329581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.347431] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.347471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.361749] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.361810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.377320] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.377363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.395390] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.395429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.409915] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.409949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.425341] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.425374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.444011] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.444072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.458693] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.458727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.475617] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.475651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.491124] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.491172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.502426] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.502458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.517918] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.517976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.527107] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.527140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.542498] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.542531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.558474] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.558510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.568239] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.568275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.582681] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.582713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.597981] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.598028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.607967] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.608014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.622839] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.622896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.632112] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.632148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.649316] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.649365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.664743] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.664775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.682488] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.682522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.698180] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.698212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.715860] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.715919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.732301] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.732352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.750592] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.750623] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.764598] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.764630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.779704] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.779736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.790670] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.790702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.806914] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.806945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.824347] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.824383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.838824] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.838883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.855725] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.855757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.870357] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.870389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.886054] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.886086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.904616] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.904648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.918527] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.918559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.934664] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.934713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.951072] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.951103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.968645] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.968678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.715 [2024-06-10 08:05:27.984969] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.715 [2024-06-10 08:05:27.985002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.002583] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.002615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.017754] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.017826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.034137] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.034185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.050202] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.050234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.068654] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.068687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.082146] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.082192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.098208] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.098240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.114684] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.114716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.130561] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.130594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.139563] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.139596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.155577] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.155609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.170915] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.170952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.180129] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.180164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.197025] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.197060] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.212560] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.212593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.227268] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.227300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.242356] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.242389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.251772] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.251863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.267343] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.267375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.283033] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.283068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.292551] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.292582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.308809] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.309032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.325660] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.325692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.342562] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.342595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.359726] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.359759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.375751] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.375834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.392444] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.392477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.409264] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.409297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.426379] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.426412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.445079] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.445115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.460587] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.460622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.477415] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.477452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.494177] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.494228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.510586] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.510653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.528706] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.528755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.542575] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.542625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.557779] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.557856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.716 [2024-06-10 08:05:28.567515] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.716 [2024-06-10 08:05:28.567547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.583138] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.583203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.600216] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.600253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.616511] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.616546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.633272] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.633321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.651603] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.651636] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.665645] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.665678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.683097] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.683130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.698596] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.698629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.708244] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.708281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.723515] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.723550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.739225] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.739258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.756625] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.756658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.773236] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.976 [2024-06-10 08:05:28.773269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.976 [2024-06-10 08:05:28.788759] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.977 [2024-06-10 08:05:28.788835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.977 [2024-06-10 08:05:28.798328] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.977 [2024-06-10 08:05:28.798362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.977 [2024-06-10 08:05:28.813303] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.977 [2024-06-10 08:05:28.813336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.977 [2024-06-10 08:05:28.822889] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.977 [2024-06-10 08:05:28.822922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.977 [2024-06-10 08:05:28.837996] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.977 [2024-06-10 08:05:28.838027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:28.853516] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:28.853549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:28.870309] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:28.870342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:28.886688] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:28.886723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:28.903432] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:28.903465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:28.920216] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:28.920253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:28.936835] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:28.936936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:28.953428] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:28.953460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:28.969732] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:28.969765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:28.986778] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:28.986854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:29.002860] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:29.002892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:29.020985] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:29.021031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:29.035068] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:29.035101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:29.051429] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:29.051462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:29.067577] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.278 [2024-06-10 08:05:29.067610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.278 [2024-06-10 08:05:29.076644] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.279 [2024-06-10 08:05:29.076676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.279 [2024-06-10 08:05:29.092306] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.279 [2024-06-10 08:05:29.092355] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.279 [2024-06-10 08:05:29.107761] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.279 [2024-06-10 08:05:29.107838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.279 [2024-06-10 08:05:29.125927] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.279 [2024-06-10 08:05:29.125963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.279 [2024-06-10 08:05:29.140092] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.279 [2024-06-10 08:05:29.140159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.155408] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.155440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.172203] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.172239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.186615] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.186648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.203200] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.203235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.219395] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.219428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.236719] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.236753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.253693] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.253726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.269781] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.269858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.288101] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.288136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.303890] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.303951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.321648] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.321679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.337308] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.337341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.354906] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.354940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.372348] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.372396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.388349] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.388428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.539 [2024-06-10 08:05:29.398235] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.539 [2024-06-10 08:05:29.398267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.414473] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.414505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.423889] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.423938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.440620] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.440655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.458879] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.458939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.474726] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.474758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.490762] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.490820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.500467] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.500500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.516322] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.516355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.532584] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.532617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.548404] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.548454] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.566231] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.566296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.581141] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.581200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.596338] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.596367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.605985] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.606032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.622476] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.622522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.798 [2024-06-10 08:05:29.639746] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.798 [2024-06-10 08:05:29.639792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.799 [2024-06-10 08:05:29.654621] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.799 [2024-06-10 08:05:29.654667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.669489] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.057 [2024-06-10 08:05:29.669518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.685678] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.057 [2024-06-10 08:05:29.685724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.701959] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.057 [2024-06-10 08:05:29.702003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.721381] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.057 [2024-06-10 08:05:29.721425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.735514] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.057 [2024-06-10 08:05:29.735559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.752205] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.057 [2024-06-10 08:05:29.752237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.768455] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.057 [2024-06-10 08:05:29.768484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.785167] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.057 [2024-06-10 08:05:29.785239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.803191] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.057 [2024-06-10 08:05:29.803235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.819580] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.057 [2024-06-10 08:05:29.819623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.057 [2024-06-10 08:05:29.836721] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.058 [2024-06-10 08:05:29.836765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.058 [2024-06-10 08:05:29.852513] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.058 [2024-06-10 08:05:29.852573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.058 [2024-06-10 08:05:29.870435] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.058 [2024-06-10 08:05:29.870480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.058 [2024-06-10 08:05:29.885708] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.058 [2024-06-10 08:05:29.885752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.058 [2024-06-10 08:05:29.900138] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.058 [2024-06-10 08:05:29.900168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.058 [2024-06-10 08:05:29.916018] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.058 [2024-06-10 08:05:29.916072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.316 [2024-06-10 08:05:29.933572] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.316 [2024-06-10 08:05:29.933618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.316 [2024-06-10 08:05:29.948413] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.316 [2024-06-10 08:05:29.948457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.316 [2024-06-10 08:05:29.963465] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.316 [2024-06-10 08:05:29.963511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.316 [2024-06-10 08:05:29.978598] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.316 [2024-06-10 08:05:29.978642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.316 [2024-06-10 08:05:29.987945] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.316 [2024-06-10 08:05:29.987989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.316 [2024-06-10 08:05:30.003315] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.316 [2024-06-10 08:05:30.003359] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.084152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.093092] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.093136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.109886] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.109930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 00:10:10.404 Latency(us) 00:10:10.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.404 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:10.404 Nvme1n1 : 5.01 12114.17 94.64 0.00 0.00 10554.21 4349.21 21924.77 00:10:10.404 =================================================================================================================== 00:10:10.404 Total : 12114.17 94.64 0.00 0.00 10554.21 4349.21 21924.77 00:10:10.404 [2024-06-10 08:05:32.122121] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.122164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.134130] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.134172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.146120] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.146162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.158131] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.158194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.170133] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.170195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.182136] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.182199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.194137] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.194200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.206186] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.206231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.218137] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.218196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.230142] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.230201] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.242165] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.242210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.254176] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.254251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.404 [2024-06-10 08:05:32.266180] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.404 [2024-06-10 08:05:32.266208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.664 [2024-06-10 08:05:32.278203] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.664 [2024-06-10 08:05:32.278261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.664 [2024-06-10 08:05:32.290161] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.664 [2024-06-10 08:05:32.290229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.664 [2024-06-10 08:05:32.302164] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.664 [2024-06-10 08:05:32.302219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.664 [2024-06-10 08:05:32.314201] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.664 [2024-06-10 08:05:32.314249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.664 [2024-06-10 08:05:32.326185] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.664 [2024-06-10 08:05:32.326225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.664 [2024-06-10 08:05:32.338184] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.664 [2024-06-10 08:05:32.338223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.664 [2024-06-10 08:05:32.350199] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.664 [2024-06-10 08:05:32.350277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.664 [2024-06-10 08:05:32.362169] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.664 [2024-06-10 08:05:32.362207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.664 [2024-06-10 08:05:32.374242] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.664 [2024-06-10 08:05:32.374262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.664 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67800) - No such process 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67800 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.664 delay0 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.664 08:05:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:10.923 [2024-06-10 08:05:32.561134] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:19.046 Initializing NVMe Controllers 00:10:19.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:19.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:19.046 Initialization complete. Launching workers. 00:10:19.046 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 269, failed: 19912 00:10:19.046 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20092, failed to submit 89 00:10:19.046 success 19978, unsuccess 114, failed 0 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:19.046 rmmod nvme_tcp 00:10:19.046 rmmod nvme_fabrics 00:10:19.046 rmmod nvme_keyring 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67645 ']' 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67645 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 67645 ']' 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 67645 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps 
--no-headers -o comm= 67645 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:19.046 killing process with pid 67645 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 67645' 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 67645 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 67645 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:19.046 00:10:19.046 real 0m25.662s 00:10:19.046 user 0m41.717s 00:10:19.046 sys 0m7.436s 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:19.046 08:05:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.046 ************************************ 00:10:19.046 END TEST nvmf_zcopy 00:10:19.046 ************************************ 00:10:19.046 08:05:40 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:19.046 08:05:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:19.047 08:05:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:19.047 08:05:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.047 ************************************ 00:10:19.047 START TEST nvmf_nmic 00:10:19.047 ************************************ 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:19.047 * Looking for test storage... 
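For reference, the namespace churn above is the negative half of the zcopy test: while the abort example drives I/O, the script keeps re-adding NSID 1 and the target keeps rejecting it, which is exactly the repeated "Requested NSID 1 already in use" output. The final delay-bdev/abort step that produced the summary table can be driven by hand with scripts/rpc.py against an already-running target; the sketch below is a rough equivalent run from the SPDK repo root (the malloc0 bdev name, addresses and flags are copied from the trace and are assumptions outside this setup), not the exact helper the suite calls.

  # swap the original namespace for a delay bdev so in-flight I/O is slow enough to abort
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # read/write latencies in microseconds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive queued random I/O from the initiator side and abort it for 5 seconds
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'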
00:10:19.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
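nvmf_veth_init, which the trace enters next, builds the virtual topology the rest of the test rides on: the target runs inside the nvmf_tgt_ns_spdk network namespace with two veth legs (10.0.0.2 and 10.0.0.3), the initiator keeps 10.0.0.1 in the root namespace, and a bridge ties the peer ends together. Condensed to its essentials (same device names and addresses as in the trace, run as root, error handling omitted), the setup amounts to roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, leg 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, leg 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target, leg 1

The pings that follow in the trace (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) are the sanity check that the bridge and iptables rules actually pass traffic before any NVMe/TCP work starts.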
00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:19.047 Cannot find device "nvmf_tgt_br" 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.047 Cannot find device "nvmf_tgt_br2" 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:19.047 Cannot find device "nvmf_tgt_br" 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:19.047 Cannot find device "nvmf_tgt_br2" 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:19.047 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:19.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:10:19.048 00:10:19.048 --- 10.0.0.2 ping statistics --- 00:10:19.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.048 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:19.048 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:19.048 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:10:19.048 00:10:19.048 --- 10.0.0.3 ping statistics --- 00:10:19.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.048 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:19.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:19.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:19.048 00:10:19.048 --- 10.0.0.1 ping statistics --- 00:10:19.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.048 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68134 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68134 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 68134 ']' 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:19.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:19.048 08:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.048 [2024-06-10 08:05:40.569585] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:10:19.048 [2024-06-10 08:05:40.569684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.048 [2024-06-10 08:05:40.711403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.048 [2024-06-10 08:05:40.813040] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.048 [2024-06-10 08:05:40.813108] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
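Because the target process now lives inside the network namespace while its JSON-RPC socket stays on the shared filesystem, nvmfappstart wraps the binary in ip netns exec and then waits for /var/tmp/spdk.sock before issuing RPCs. Done by hand it comes down to something like the sketch below; the spdk_get_version probe is just one convenient liveness check, not what the waitforlisten helper literally calls.

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # poll the default RPC socket until the application answers
  until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done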
00:10:19.048 [2024-06-10 08:05:40.813134] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.048 [2024-06-10 08:05:40.813141] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.048 [2024-06-10 08:05:40.813147] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.048 [2024-06-10 08:05:40.813299] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.048 [2024-06-10 08:05:40.813642] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.048 [2024-06-10 08:05:40.814101] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.048 [2024-06-10 08:05:40.814185] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.048 [2024-06-10 08:05:40.872552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:19.616 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:19.616 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:10:19.616 08:05:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:19.616 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:19.616 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 [2024-06-10 08:05:41.525297] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 Malloc0 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:19.875 08:05:41 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 [2024-06-10 08:05:41.599063] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:19.875 test case1: single bdev can't be used in multiple subsystems 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 [2024-06-10 08:05:41.622885] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:19.875 [2024-06-10 08:05:41.622924] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:19.875 [2024-06-10 08:05:41.622936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.875 request: 00:10:19.875 { 00:10:19.875 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:19.875 "namespace": { 00:10:19.875 "bdev_name": "Malloc0", 00:10:19.875 "no_auto_visible": false 00:10:19.875 }, 00:10:19.875 "method": "nvmf_subsystem_add_ns", 00:10:19.875 "req_id": 1 00:10:19.875 } 00:10:19.875 Got JSON-RPC error response 00:10:19.875 response: 00:10:19.875 { 00:10:19.875 "code": -32602, 00:10:19.875 "message": "Invalid parameters" 00:10:19.875 } 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:19.875 Adding namespace failed - expected result. 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
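Test case1 above exercises the exclusive-claim rule: Malloc0 is already attached to nqn.2016-06.io.spdk:cnode1, so adding it to cnode2 is rejected, the bdev layer logs the exclusive_write claim, and the JSON-RPC call returns -32602 "Invalid parameters"; the test only passes because that RPC fails. Roughly the same check can be driven by hand with rpc.py against a running nvmf_tgt (rpc.py path, NQNs and arguments as in the log; this is a sketch, not the nmic.sh script itself):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# expected to fail: Malloc0 is already claimed exclusively by cnode1
if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'Adding namespace failed - expected result.'
fi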
00:10:19.875 test case2: host connect to nvmf target in multiple paths 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 [2024-06-10 08:05:41.635014] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:19.875 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid=0b063e5e-64f6-4b4f-b15f-bd51b74609ab -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:20.134 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid=0b063e5e-64f6-4b4f-b15f-bd51b74609ab -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:20.134 08:05:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.134 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:10:20.134 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.134 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:20.134 08:05:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:10:22.049 08:05:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:22.049 08:05:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:22.049 08:05:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.049 08:05:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:22.049 08:05:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.049 08:05:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:10:22.049 08:05:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:22.308 [global] 00:10:22.308 thread=1 00:10:22.308 invalidate=1 00:10:22.308 rw=write 00:10:22.308 time_based=1 00:10:22.308 runtime=1 00:10:22.308 ioengine=libaio 00:10:22.308 direct=1 00:10:22.308 bs=4096 00:10:22.308 iodepth=1 00:10:22.308 norandommap=0 00:10:22.308 numjobs=1 00:10:22.308 00:10:22.308 verify_dump=1 00:10:22.308 verify_backlog=512 00:10:22.308 verify_state_save=0 00:10:22.308 do_verify=1 00:10:22.308 verify=crc32c-intel 00:10:22.308 [job0] 00:10:22.308 filename=/dev/nvme0n1 00:10:22.308 Could not set queue depth (nvme0n1) 00:10:22.308 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.308 fio-3.35 00:10:22.308 Starting 1 thread 00:10:23.689 00:10:23.689 job0: (groupid=0, jobs=1): err= 0: pid=68221: Mon Jun 10 08:05:45 2024 00:10:23.689 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:23.689 slat (usec): min=11, max=116, avg=15.99, stdev= 6.31 00:10:23.689 clat (usec): min=120, 
max=908, avg=174.07, stdev=33.05 00:10:23.689 lat (usec): min=133, max=922, avg=190.06, stdev=33.90 00:10:23.689 clat percentiles (usec): 00:10:23.689 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:10:23.689 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:10:23.689 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 229], 00:10:23.689 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 420], 99.95th=[ 693], 00:10:23.689 | 99.99th=[ 906] 00:10:23.689 write: IOPS=3080, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:23.689 slat (usec): min=14, max=137, avg=23.01, stdev= 7.51 00:10:23.689 clat (usec): min=75, max=307, avg=108.07, stdev=21.81 00:10:23.689 lat (usec): min=93, max=342, avg=131.08, stdev=23.69 00:10:23.689 clat percentiles (usec): 00:10:23.689 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 91], 00:10:23.689 | 30.00th=[ 94], 40.00th=[ 98], 50.00th=[ 104], 60.00th=[ 110], 00:10:23.689 | 70.00th=[ 116], 80.00th=[ 124], 90.00th=[ 137], 95.00th=[ 147], 00:10:23.689 | 99.00th=[ 178], 99.50th=[ 194], 99.90th=[ 260], 99.95th=[ 306], 00:10:23.689 | 99.99th=[ 310] 00:10:23.689 bw ( KiB/s): min=13056, max=13056, per=100.00%, avg=13056.00, stdev= 0.00, samples=1 00:10:23.689 iops : min= 3264, max= 3264, avg=3264.00, stdev= 0.00, samples=1 00:10:23.689 lat (usec) : 100=22.12%, 250=76.80%, 500=1.04%, 750=0.02%, 1000=0.02% 00:10:23.689 cpu : usr=1.90%, sys=9.80%, ctx=6164, majf=0, minf=2 00:10:23.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.689 issued rwts: total=3072,3084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.689 00:10:23.689 Run status group 0 (all jobs): 00:10:23.690 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:23.690 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:23.690 00:10:23.690 Disk stats (read/write): 00:10:23.690 nvme0n1: ios=2676/3072, merge=0/0, ticks=512/404, in_queue=916, util=91.38% 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # 
sync 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.690 rmmod nvme_tcp 00:10:23.690 rmmod nvme_fabrics 00:10:23.690 rmmod nvme_keyring 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68134 ']' 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68134 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 68134 ']' 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 68134 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 68134 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:23.690 killing process with pid 68134 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 68134' 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 68134 00:10:23.690 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 68134 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:23.949 00:10:23.949 real 0m5.786s 00:10:23.949 user 0m18.182s 00:10:23.949 sys 0m2.290s 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:23.949 08:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.949 ************************************ 00:10:23.949 END TEST nvmf_nmic 00:10:23.949 ************************************ 00:10:24.210 08:05:45 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:24.210 08:05:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:24.210 08:05:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:24.210 08:05:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:24.210 ************************************ 00:10:24.210 START TEST nvmf_fio_target 00:10:24.210 ************************************ 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:24.210 * Looking for test storage... 00:10:24.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:24.210 
08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:24.210 08:05:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:24.210 Cannot find device "nvmf_tgt_br" 00:10:24.210 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:24.210 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:24.210 Cannot find device "nvmf_tgt_br2" 00:10:24.210 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:24.210 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:24.210 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:24.210 Cannot find device "nvmf_tgt_br" 00:10:24.210 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:24.210 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:24.210 Cannot find device "nvmf_tgt_br2" 00:10:24.210 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:24.210 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:24.470 08:05:46 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:24.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:24.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:24.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:24.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:10:24.470 00:10:24.470 --- 10.0.0.2 ping statistics --- 00:10:24.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.470 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:24.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:24.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:10:24.470 00:10:24.470 --- 10.0.0.3 ping statistics --- 00:10:24.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.470 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:24.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:10:24.470 00:10:24.470 --- 10.0.0.1 ping statistics --- 00:10:24.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.470 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:24.470 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68400 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68400 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 68400 ']' 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:24.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
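As in the nmic run earlier, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the application answers on its JSON-RPC socket at /var/tmp/spdk.sock. The autotest helper is more involved, but the pattern amounts to the sketch below (polling spdk_get_version is an illustrative choice, not what waitforlisten literally does; the binary and rpc.py paths are taken from the log):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# poll the JSON-RPC server until it accepts requests, bailing out if the app died
until "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited before listening' >&2; exit 1; }
    sleep 0.5
done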
00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:24.729 08:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.729 [2024-06-10 08:05:46.400907] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:10:24.729 [2024-06-10 08:05:46.401009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.729 [2024-06-10 08:05:46.537858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.988 [2024-06-10 08:05:46.655367] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.988 [2024-06-10 08:05:46.655741] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.988 [2024-06-10 08:05:46.655884] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.988 [2024-06-10 08:05:46.655982] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.988 [2024-06-10 08:05:46.656102] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.988 [2024-06-10 08:05:46.656387] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.988 [2024-06-10 08:05:46.656495] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.988 [2024-06-10 08:05:46.657146] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.988 [2024-06-10 08:05:46.657165] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.988 [2024-06-10 08:05:46.720161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:25.555 08:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:25.555 08:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:10:25.555 08:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:25.555 08:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:25.555 08:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.555 08:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.555 08:05:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:25.814 [2024-06-10 08:05:47.656688] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.136 08:05:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:26.136 08:05:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:26.136 08:05:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:26.393 08:05:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:26.393 08:05:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:26.960 08:05:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:26.960 08:05:48 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:26.960 08:05:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:26.960 08:05:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:27.219 08:05:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.477 08:05:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:27.477 08:05:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.735 08:05:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:27.735 08:05:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.994 08:05:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:27.994 08:05:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:28.252 08:05:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:28.510 08:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:28.510 08:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.768 08:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:28.768 08:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:29.026 08:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.026 [2024-06-10 08:05:50.875469] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.283 08:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:29.283 08:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:29.542 08:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid=0b063e5e-64f6-4b4f-b15f-bd51b74609ab -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:29.801 08:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:29.801 08:05:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:10:29.801 08:05:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:29.801 08:05:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:10:29.801 08:05:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 
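The RPC sequence above gives fio.sh its bdev inventory: seven 64 MiB malloc bdevs, of which Malloc0 and Malloc1 are exported directly, Malloc2/Malloc3 back the RAID-0 bdev raid0, and Malloc4/Malloc5/Malloc6 back the concat bdev concat0, so cnode1 ends up with four namespaces and the initiator waits for four SERIAL matches after a single connect. Condensed into one sketch (flag spellings from the log, ordering slightly simplified; NVME_HOSTNQN and NVME_HOSTID are the values generated earlier in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# seven 64 MiB malloc bdevs, auto-named Malloc0 .. Malloc6
for _ in 1 2 3 4 5 6 7; do "$rpc" bdev_malloc_create 64 512; done
"$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
"$rpc" bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: one connect exposes all four namespaces as nvme0n1 .. nvme0n4
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420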
00:10:29.801 08:05:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:10:31.704 08:05:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:31.704 08:05:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:31.704 08:05:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:31.704 08:05:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:10:31.704 08:05:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:31.704 08:05:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:10:31.704 08:05:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:31.704 [global] 00:10:31.704 thread=1 00:10:31.704 invalidate=1 00:10:31.704 rw=write 00:10:31.704 time_based=1 00:10:31.704 runtime=1 00:10:31.704 ioengine=libaio 00:10:31.704 direct=1 00:10:31.704 bs=4096 00:10:31.704 iodepth=1 00:10:31.704 norandommap=0 00:10:31.704 numjobs=1 00:10:31.704 00:10:31.704 verify_dump=1 00:10:31.704 verify_backlog=512 00:10:31.704 verify_state_save=0 00:10:31.704 do_verify=1 00:10:31.704 verify=crc32c-intel 00:10:31.704 [job0] 00:10:31.704 filename=/dev/nvme0n1 00:10:31.704 [job1] 00:10:31.704 filename=/dev/nvme0n2 00:10:31.704 [job2] 00:10:31.704 filename=/dev/nvme0n3 00:10:31.963 [job3] 00:10:31.963 filename=/dev/nvme0n4 00:10:31.963 Could not set queue depth (nvme0n1) 00:10:31.963 Could not set queue depth (nvme0n2) 00:10:31.963 Could not set queue depth (nvme0n3) 00:10:31.964 Could not set queue depth (nvme0n4) 00:10:31.964 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.964 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.964 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.964 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.964 fio-3.35 00:10:31.964 Starting 4 threads 00:10:33.340 00:10:33.340 job0: (groupid=0, jobs=1): err= 0: pid=68592: Mon Jun 10 08:05:54 2024 00:10:33.340 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:33.340 slat (nsec): min=11963, max=55004, avg=14764.31, stdev=4016.90 00:10:33.340 clat (usec): min=132, max=1539, avg=201.99, stdev=36.33 00:10:33.340 lat (usec): min=145, max=1555, avg=216.76, stdev=36.61 00:10:33.340 clat percentiles (usec): 00:10:33.340 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 180], 00:10:33.340 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:10:33.340 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 247], 00:10:33.340 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 297], 99.95th=[ 326], 00:10:33.340 | 99.99th=[ 1532] 00:10:33.340 write: IOPS=2608, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:10:33.340 slat (usec): min=15, max=123, avg=22.32, stdev= 6.15 00:10:33.340 clat (usec): min=89, max=2450, avg=144.45, stdev=59.40 00:10:33.340 lat (usec): min=110, max=2469, avg=166.77, stdev=59.91 00:10:33.340 clat percentiles (usec): 00:10:33.340 | 1.00th=[ 103], 5.00th=[ 113], 10.00th=[ 118], 20.00th=[ 125], 00:10:33.340 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 145], 00:10:33.340 | 
70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 172], 95.00th=[ 184], 00:10:33.340 | 99.00th=[ 206], 99.50th=[ 219], 99.90th=[ 594], 99.95th=[ 1663], 00:10:33.340 | 99.99th=[ 2442] 00:10:33.340 bw ( KiB/s): min=12288, max=12288, per=37.31%, avg=12288.00, stdev= 0.00, samples=1 00:10:33.340 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:33.340 lat (usec) : 100=0.29%, 250=97.66%, 500=1.97%, 750=0.02% 00:10:33.340 lat (msec) : 2=0.04%, 4=0.02% 00:10:33.340 cpu : usr=2.20%, sys=7.50%, ctx=5171, majf=0, minf=9 00:10:33.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.340 issued rwts: total=2560,2611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.340 job1: (groupid=0, jobs=1): err= 0: pid=68593: Mon Jun 10 08:05:54 2024 00:10:33.340 read: IOPS=2136, BW=8547KiB/s (8753kB/s)(8556KiB/1001msec) 00:10:33.340 slat (nsec): min=11397, max=51030, avg=15574.38, stdev=4168.60 00:10:33.340 clat (usec): min=149, max=7135, avg=223.90, stdev=223.07 00:10:33.340 lat (usec): min=163, max=7148, avg=239.47, stdev=223.29 00:10:33.340 clat percentiles (usec): 00:10:33.340 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 188], 00:10:33.340 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 215], 00:10:33.340 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 251], 95.00th=[ 281], 00:10:33.340 | 99.00th=[ 424], 99.50th=[ 453], 99.90th=[ 3195], 99.95th=[ 6783], 00:10:33.340 | 99.99th=[ 7111] 00:10:33.340 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:33.340 slat (usec): min=14, max=105, avg=24.10, stdev= 6.70 00:10:33.340 clat (usec): min=100, max=5834, avg=162.86, stdev=124.03 00:10:33.340 lat (usec): min=119, max=5860, avg=186.97, stdev=124.61 00:10:33.340 clat percentiles (usec): 00:10:33.340 | 1.00th=[ 111], 5.00th=[ 120], 10.00th=[ 126], 20.00th=[ 135], 00:10:33.340 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 161], 00:10:33.340 | 70.00th=[ 167], 80.00th=[ 178], 90.00th=[ 200], 95.00th=[ 225], 00:10:33.340 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 996], 99.95th=[ 2008], 00:10:33.340 | 99.99th=[ 5866] 00:10:33.340 bw ( KiB/s): min=11784, max=11784, per=35.78%, avg=11784.00, stdev= 0.00, samples=1 00:10:33.340 iops : min= 2946, max= 2946, avg=2946.00, stdev= 0.00, samples=1 00:10:33.340 lat (usec) : 250=93.66%, 500=6.15%, 750=0.02%, 1000=0.04% 00:10:33.340 lat (msec) : 2=0.02%, 4=0.04%, 10=0.06% 00:10:33.340 cpu : usr=2.20%, sys=7.30%, ctx=4700, majf=0, minf=5 00:10:33.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.340 issued rwts: total=2139,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.340 job2: (groupid=0, jobs=1): err= 0: pid=68594: Mon Jun 10 08:05:54 2024 00:10:33.340 read: IOPS=1298, BW=5195KiB/s (5319kB/s)(5200KiB/1001msec) 00:10:33.340 slat (nsec): min=15365, max=71915, avg=22945.52, stdev=6001.48 00:10:33.340 clat (usec): min=209, max=2138, avg=368.22, stdev=78.40 00:10:33.340 lat (usec): min=230, max=2159, avg=391.17, stdev=78.68 00:10:33.340 clat percentiles (usec): 
00:10:33.340 | 1.00th=[ 273], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 326], 00:10:33.340 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 363], 00:10:33.340 | 70.00th=[ 375], 80.00th=[ 396], 90.00th=[ 453], 95.00th=[ 494], 00:10:33.340 | 99.00th=[ 594], 99.50th=[ 660], 99.90th=[ 742], 99.95th=[ 2147], 00:10:33.340 | 99.99th=[ 2147] 00:10:33.340 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:33.340 slat (usec): min=22, max=127, avg=38.71, stdev=11.05 00:10:33.340 clat (usec): min=121, max=3089, avg=275.68, stdev=100.90 00:10:33.340 lat (usec): min=148, max=3128, avg=314.39, stdev=103.73 00:10:33.340 clat percentiles (usec): 00:10:33.340 | 1.00th=[ 143], 5.00th=[ 169], 10.00th=[ 206], 20.00th=[ 235], 00:10:33.340 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:10:33.340 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 359], 95.00th=[ 424], 00:10:33.340 | 99.00th=[ 486], 99.50th=[ 494], 99.90th=[ 1037], 99.95th=[ 3097], 00:10:33.340 | 99.99th=[ 3097] 00:10:33.340 bw ( KiB/s): min= 7560, max= 7560, per=22.95%, avg=7560.00, stdev= 0.00, samples=1 00:10:33.340 iops : min= 1890, max= 1890, avg=1890.00, stdev= 0.00, samples=1 00:10:33.340 lat (usec) : 250=18.41%, 500=79.37%, 750=2.05%, 1000=0.07% 00:10:33.340 lat (msec) : 2=0.04%, 4=0.07% 00:10:33.340 cpu : usr=1.70%, sys=7.20%, ctx=2841, majf=0, minf=8 00:10:33.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.340 issued rwts: total=1300,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.340 job3: (groupid=0, jobs=1): err= 0: pid=68595: Mon Jun 10 08:05:54 2024 00:10:33.340 read: IOPS=1279, BW=5119KiB/s (5242kB/s)(5124KiB/1001msec) 00:10:33.340 slat (nsec): min=15176, max=89247, avg=26503.53, stdev=9291.85 00:10:33.340 clat (usec): min=253, max=2137, avg=399.99, stdev=125.11 00:10:33.340 lat (usec): min=290, max=2157, avg=426.49, stdev=130.17 00:10:33.340 clat percentiles (usec): 00:10:33.340 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 326], 00:10:33.340 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 367], 00:10:33.340 | 70.00th=[ 379], 80.00th=[ 461], 90.00th=[ 603], 95.00th=[ 660], 00:10:33.340 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 955], 99.95th=[ 2147], 00:10:33.340 | 99.99th=[ 2147] 00:10:33.340 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:33.340 slat (usec): min=22, max=107, avg=36.40, stdev= 8.51 00:10:33.341 clat (usec): min=104, max=783, avg=253.24, stdev=51.55 00:10:33.341 lat (usec): min=128, max=809, avg=289.64, stdev=53.13 00:10:33.341 clat percentiles (usec): 00:10:33.341 | 1.00th=[ 131], 5.00th=[ 153], 10.00th=[ 178], 20.00th=[ 225], 00:10:33.341 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:10:33.341 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 330], 00:10:33.341 | 99.00th=[ 379], 99.50th=[ 408], 99.90th=[ 437], 99.95th=[ 783], 00:10:33.341 | 99.99th=[ 783] 00:10:33.341 bw ( KiB/s): min= 8040, max= 8040, per=24.41%, avg=8040.00, stdev= 0.00, samples=1 00:10:33.341 iops : min= 2010, max= 2010, avg=2010.00, stdev= 0.00, samples=1 00:10:33.341 lat (usec) : 250=22.65%, 500=69.51%, 750=7.28%, 1000=0.53% 00:10:33.341 lat (msec) : 4=0.04% 00:10:33.341 cpu : usr=1.90%, sys=7.10%, ctx=2823, majf=0, 
minf=13 00:10:33.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.341 issued rwts: total=1281,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.341 00:10:33.341 Run status group 0 (all jobs): 00:10:33.341 READ: bw=28.4MiB/s (29.8MB/s), 5119KiB/s-9.99MiB/s (5242kB/s-10.5MB/s), io=28.4MiB (29.8MB), run=1001-1001msec 00:10:33.341 WRITE: bw=32.2MiB/s (33.7MB/s), 6138KiB/s-10.2MiB/s (6285kB/s-10.7MB/s), io=32.2MiB (33.8MB), run=1001-1001msec 00:10:33.341 00:10:33.341 Disk stats (read/write): 00:10:33.341 nvme0n1: ios=2097/2345, merge=0/0, ticks=467/388, in_queue=855, util=86.95% 00:10:33.341 nvme0n2: ios=2048/2095, merge=0/0, ticks=462/356, in_queue=818, util=86.28% 00:10:33.341 nvme0n3: ios=1024/1420, merge=0/0, ticks=379/428, in_queue=807, util=88.80% 00:10:33.341 nvme0n4: ios=1038/1536, merge=0/0, ticks=394/436, in_queue=830, util=89.57% 00:10:33.341 08:05:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:33.341 [global] 00:10:33.341 thread=1 00:10:33.341 invalidate=1 00:10:33.341 rw=randwrite 00:10:33.341 time_based=1 00:10:33.341 runtime=1 00:10:33.341 ioengine=libaio 00:10:33.341 direct=1 00:10:33.341 bs=4096 00:10:33.341 iodepth=1 00:10:33.341 norandommap=0 00:10:33.341 numjobs=1 00:10:33.341 00:10:33.341 verify_dump=1 00:10:33.341 verify_backlog=512 00:10:33.341 verify_state_save=0 00:10:33.341 do_verify=1 00:10:33.341 verify=crc32c-intel 00:10:33.341 [job0] 00:10:33.341 filename=/dev/nvme0n1 00:10:33.341 [job1] 00:10:33.341 filename=/dev/nvme0n2 00:10:33.341 [job2] 00:10:33.341 filename=/dev/nvme0n3 00:10:33.341 [job3] 00:10:33.341 filename=/dev/nvme0n4 00:10:33.341 Could not set queue depth (nvme0n1) 00:10:33.341 Could not set queue depth (nvme0n2) 00:10:33.341 Could not set queue depth (nvme0n3) 00:10:33.341 Could not set queue depth (nvme0n4) 00:10:33.341 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.341 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.341 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.341 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.341 fio-3.35 00:10:33.341 Starting 4 threads 00:10:34.743 00:10:34.743 job0: (groupid=0, jobs=1): err= 0: pid=68648: Mon Jun 10 08:05:56 2024 00:10:34.743 read: IOPS=1544, BW=6178KiB/s (6326kB/s)(6184KiB/1001msec) 00:10:34.743 slat (nsec): min=11848, max=58784, avg=17577.99, stdev=5815.14 00:10:34.743 clat (usec): min=143, max=584, avg=281.00, stdev=61.57 00:10:34.743 lat (usec): min=156, max=617, avg=298.58, stdev=63.70 00:10:34.743 clat percentiles (usec): 00:10:34.743 | 1.00th=[ 165], 5.00th=[ 184], 10.00th=[ 202], 20.00th=[ 229], 00:10:34.743 | 30.00th=[ 247], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 293], 00:10:34.743 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 355], 95.00th=[ 375], 00:10:34.743 | 99.00th=[ 474], 99.50th=[ 523], 99.90th=[ 553], 99.95th=[ 586], 00:10:34.743 | 99.99th=[ 586] 00:10:34.743 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 
00:10:34.743 slat (usec): min=14, max=111, avg=29.81, stdev=11.84 00:10:34.743 clat (usec): min=96, max=8017, avg=228.89, stdev=301.69 00:10:34.743 lat (usec): min=115, max=8038, avg=258.70, stdev=303.57 00:10:34.743 clat percentiles (usec): 00:10:34.743 | 1.00th=[ 110], 5.00th=[ 122], 10.00th=[ 133], 20.00th=[ 147], 00:10:34.743 | 30.00th=[ 163], 40.00th=[ 182], 50.00th=[ 200], 60.00th=[ 221], 00:10:34.743 | 70.00th=[ 239], 80.00th=[ 265], 90.00th=[ 310], 95.00th=[ 392], 00:10:34.743 | 99.00th=[ 474], 99.50th=[ 506], 99.90th=[ 6128], 99.95th=[ 6259], 00:10:34.743 | 99.99th=[ 8029] 00:10:34.743 bw ( KiB/s): min= 8240, max= 8240, per=26.28%, avg=8240.00, stdev= 0.00, samples=1 00:10:34.743 iops : min= 2060, max= 2060, avg=2060.00, stdev= 0.00, samples=1 00:10:34.743 lat (usec) : 100=0.11%, 250=55.82%, 500=43.38%, 750=0.50%, 1000=0.03% 00:10:34.743 lat (msec) : 4=0.06%, 10=0.11% 00:10:34.743 cpu : usr=1.90%, sys=6.90%, ctx=3598, majf=0, minf=10 00:10:34.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.743 issued rwts: total=1546,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.743 job1: (groupid=0, jobs=1): err= 0: pid=68649: Mon Jun 10 08:05:56 2024 00:10:34.743 read: IOPS=1606, BW=6426KiB/s (6580kB/s)(6432KiB/1001msec) 00:10:34.743 slat (nsec): min=14434, max=89983, avg=21796.68, stdev=8041.75 00:10:34.743 clat (usec): min=148, max=2126, avg=298.80, stdev=101.00 00:10:34.743 lat (usec): min=166, max=2142, avg=320.59, stdev=104.70 00:10:34.743 clat percentiles (usec): 00:10:34.743 | 1.00th=[ 176], 5.00th=[ 196], 10.00th=[ 215], 20.00th=[ 237], 00:10:34.743 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 297], 00:10:34.743 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 367], 95.00th=[ 502], 00:10:34.743 | 99.00th=[ 660], 99.50th=[ 701], 99.90th=[ 775], 99.95th=[ 2114], 00:10:34.743 | 99.99th=[ 2114] 00:10:34.743 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:34.743 slat (usec): min=17, max=183, avg=29.36, stdev= 9.53 00:10:34.743 clat (usec): min=95, max=543, avg=202.75, stdev=57.89 00:10:34.743 lat (usec): min=124, max=571, avg=232.11, stdev=60.71 00:10:34.743 clat percentiles (usec): 00:10:34.743 | 1.00th=[ 113], 5.00th=[ 126], 10.00th=[ 135], 20.00th=[ 149], 00:10:34.743 | 30.00th=[ 165], 40.00th=[ 182], 50.00th=[ 198], 60.00th=[ 215], 00:10:34.743 | 70.00th=[ 229], 80.00th=[ 249], 90.00th=[ 277], 95.00th=[ 293], 00:10:34.743 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 506], 99.95th=[ 519], 00:10:34.743 | 99.99th=[ 545] 00:10:34.744 bw ( KiB/s): min= 8192, max= 8192, per=26.13%, avg=8192.00, stdev= 0.00, samples=1 00:10:34.744 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:34.744 lat (usec) : 100=0.03%, 250=57.52%, 500=40.15%, 750=2.13%, 1000=0.14% 00:10:34.744 lat (msec) : 4=0.03% 00:10:34.744 cpu : usr=1.70%, sys=7.80%, ctx=3666, majf=0, minf=15 00:10:34.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.744 issued rwts: total=1608,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.744 latency : target=0, window=0, percentile=100.00%, depth=1 
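For reference, the fio-wrapper call that produced this randwrite run (-p nvmf -i 4096 -d 1 -t randwrite -r 1 -v) expands into a small libaio job file; reassembled from the parameters echoed at the start of the run, it is roughly equivalent to the following (the /tmp path is illustrative, the wrapper manages its own temporary job file):

cat > /tmp/nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel
[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-randwrite.fio

do_verify=1 with verify=crc32c-intel is what turns each of these one-second runs into a data-integrity check over the NVMe/TCP path rather than a pure throughput measurement; the per-job reports for the remaining namespaces continue below.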
00:10:34.744 job2: (groupid=0, jobs=1): err= 0: pid=68650: Mon Jun 10 08:05:56 2024 00:10:34.744 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:34.744 slat (nsec): min=15364, max=83445, avg=24021.64, stdev=8358.20 00:10:34.744 clat (usec): min=209, max=472, avg=320.23, stdev=42.97 00:10:34.744 lat (usec): min=229, max=508, avg=344.25, stdev=44.03 00:10:34.744 clat percentiles (usec): 00:10:34.744 | 1.00th=[ 239], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 281], 00:10:34.744 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:10:34.744 | 70.00th=[ 343], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 392], 00:10:34.744 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 469], 99.95th=[ 474], 00:10:34.744 | 99.99th=[ 474] 00:10:34.744 write: IOPS=1607, BW=6430KiB/s (6584kB/s)(6436KiB/1001msec); 0 zone resets 00:10:34.744 slat (usec): min=20, max=115, avg=38.01, stdev=12.98 00:10:34.744 clat (usec): min=155, max=1475, avg=248.46, stdev=50.75 00:10:34.744 lat (usec): min=186, max=1540, avg=286.47, stdev=53.96 00:10:34.744 clat percentiles (usec): 00:10:34.744 | 1.00th=[ 172], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 215], 00:10:34.744 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 255], 00:10:34.744 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 318], 00:10:34.744 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 619], 99.95th=[ 1483], 00:10:34.744 | 99.99th=[ 1483] 00:10:34.744 bw ( KiB/s): min= 8175, max= 8175, per=26.07%, avg=8175.00, stdev= 0.00, samples=1 00:10:34.744 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:34.744 lat (usec) : 250=29.76%, 500=70.17%, 750=0.03% 00:10:34.744 lat (msec) : 2=0.03% 00:10:34.744 cpu : usr=2.20%, sys=7.80%, ctx=3148, majf=0, minf=11 00:10:34.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.744 issued rwts: total=1536,1609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.744 job3: (groupid=0, jobs=1): err= 0: pid=68651: Mon Jun 10 08:05:56 2024 00:10:34.744 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:34.744 slat (nsec): min=11411, max=62222, avg=15939.41, stdev=5743.53 00:10:34.744 clat (usec): min=151, max=2486, avg=250.65, stdev=71.02 00:10:34.744 lat (usec): min=164, max=2498, avg=266.59, stdev=71.55 00:10:34.744 clat percentiles (usec): 00:10:34.744 | 1.00th=[ 167], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 208], 00:10:34.744 | 30.00th=[ 219], 40.00th=[ 231], 50.00th=[ 243], 60.00th=[ 255], 00:10:34.744 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 318], 95.00th=[ 338], 00:10:34.744 | 99.00th=[ 375], 99.50th=[ 416], 99.90th=[ 668], 99.95th=[ 816], 00:10:34.744 | 99.99th=[ 2474] 00:10:34.744 write: IOPS=2138, BW=8555KiB/s (8761kB/s)(8564KiB/1001msec); 0 zone resets 00:10:34.744 slat (nsec): min=15073, max=90397, avg=24204.54, stdev=7750.61 00:10:34.744 clat (usec): min=92, max=580, avg=184.18, stdev=43.89 00:10:34.744 lat (usec): min=110, max=615, avg=208.39, stdev=46.27 00:10:34.744 clat percentiles (usec): 00:10:34.744 | 1.00th=[ 113], 5.00th=[ 127], 10.00th=[ 135], 20.00th=[ 149], 00:10:34.744 | 30.00th=[ 157], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 188], 00:10:34.744 | 70.00th=[ 204], 80.00th=[ 219], 90.00th=[ 245], 95.00th=[ 265], 00:10:34.744 | 99.00th=[ 302], 99.50th=[ 338], 99.90th=[ 363], 
99.95th=[ 375], 00:10:34.744 | 99.99th=[ 578] 00:10:34.744 bw ( KiB/s): min= 8192, max= 8192, per=26.13%, avg=8192.00, stdev= 0.00, samples=1 00:10:34.744 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:34.744 lat (usec) : 100=0.02%, 250=74.15%, 500=25.71%, 750=0.07%, 1000=0.02% 00:10:34.744 lat (msec) : 4=0.02% 00:10:34.744 cpu : usr=1.80%, sys=6.70%, ctx=4189, majf=0, minf=9 00:10:34.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.744 issued rwts: total=2048,2141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.744 00:10:34.744 Run status group 0 (all jobs): 00:10:34.744 READ: bw=26.3MiB/s (27.6MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=26.3MiB (27.6MB), run=1001-1001msec 00:10:34.744 WRITE: bw=30.6MiB/s (32.1MB/s), 6430KiB/s-8555KiB/s (6584kB/s-8761kB/s), io=30.6MiB (32.1MB), run=1001-1001msec 00:10:34.744 00:10:34.744 Disk stats (read/write): 00:10:34.744 nvme0n1: ios=1586/1570, merge=0/0, ticks=464/344, in_queue=808, util=86.46% 00:10:34.744 nvme0n2: ios=1569/1642, merge=0/0, ticks=472/353, in_queue=825, util=88.11% 00:10:34.744 nvme0n3: ios=1203/1536, merge=0/0, ticks=388/418, in_queue=806, util=89.10% 00:10:34.744 nvme0n4: ios=1536/2046, merge=0/0, ticks=395/392, in_queue=787, util=89.75% 00:10:34.744 08:05:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:34.744 [global] 00:10:34.744 thread=1 00:10:34.744 invalidate=1 00:10:34.744 rw=write 00:10:34.744 time_based=1 00:10:34.744 runtime=1 00:10:34.744 ioengine=libaio 00:10:34.744 direct=1 00:10:34.744 bs=4096 00:10:34.744 iodepth=128 00:10:34.744 norandommap=0 00:10:34.744 numjobs=1 00:10:34.744 00:10:34.744 verify_dump=1 00:10:34.744 verify_backlog=512 00:10:34.744 verify_state_save=0 00:10:34.744 do_verify=1 00:10:34.744 verify=crc32c-intel 00:10:34.744 [job0] 00:10:34.744 filename=/dev/nvme0n1 00:10:34.744 [job1] 00:10:34.745 filename=/dev/nvme0n2 00:10:34.745 [job2] 00:10:34.745 filename=/dev/nvme0n3 00:10:34.745 [job3] 00:10:34.745 filename=/dev/nvme0n4 00:10:34.745 Could not set queue depth (nvme0n1) 00:10:34.745 Could not set queue depth (nvme0n2) 00:10:34.745 Could not set queue depth (nvme0n3) 00:10:34.745 Could not set queue depth (nvme0n4) 00:10:34.745 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.745 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.745 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.745 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.745 fio-3.35 00:10:34.745 Starting 4 threads 00:10:36.120 00:10:36.120 job0: (groupid=0, jobs=1): err= 0: pid=68705: Mon Jun 10 08:05:57 2024 00:10:36.120 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:10:36.120 slat (usec): min=6, max=16521, avg=226.72, stdev=1104.64 00:10:36.120 clat (usec): min=11301, max=55309, avg=29374.34, stdev=13804.49 00:10:36.120 lat (usec): min=11319, max=61525, avg=29601.06, stdev=13893.65 00:10:36.120 clat percentiles (usec): 00:10:36.120 | 1.00th=[13042], 
5.00th=[13698], 10.00th=[15008], 20.00th=[15664], 00:10:36.120 | 30.00th=[16188], 40.00th=[17171], 50.00th=[23725], 60.00th=[39060], 00:10:36.120 | 70.00th=[40633], 80.00th=[41681], 90.00th=[47449], 95.00th=[50070], 00:10:36.120 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:10:36.120 | 99.99th=[55313] 00:10:36.120 write: IOPS=2270, BW=9081KiB/s (9299kB/s)(9172KiB/1010msec); 0 zone resets 00:10:36.120 slat (usec): min=12, max=10401, avg=226.42, stdev=1016.98 00:10:36.120 clat (usec): min=6585, max=94390, avg=29294.73, stdev=16001.35 00:10:36.120 lat (usec): min=10257, max=96999, avg=29521.14, stdev=16087.63 00:10:36.120 clat percentiles (usec): 00:10:36.120 | 1.00th=[13829], 5.00th=[14877], 10.00th=[15270], 20.00th=[15926], 00:10:36.120 | 30.00th=[17433], 40.00th=[17695], 50.00th=[22152], 60.00th=[31327], 00:10:36.120 | 70.00th=[36439], 80.00th=[40109], 90.00th=[48497], 95.00th=[60556], 00:10:36.121 | 99.00th=[85459], 99.50th=[87557], 99.90th=[93848], 99.95th=[93848], 00:10:36.121 | 99.99th=[94897] 00:10:36.121 bw ( KiB/s): min= 5021, max=12312, per=24.50%, avg=8666.50, stdev=5155.52, samples=2 00:10:36.121 iops : min= 1255, max= 3078, avg=2166.50, stdev=1289.06, samples=2 00:10:36.121 lat (msec) : 10=0.02%, 20=48.68%, 50=43.65%, 100=7.65% 00:10:36.121 cpu : usr=2.18%, sys=7.23%, ctx=371, majf=0, minf=1 00:10:36.121 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:10:36.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.121 issued rwts: total=2048,2293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.121 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.121 job1: (groupid=0, jobs=1): err= 0: pid=68706: Mon Jun 10 08:05:57 2024 00:10:36.121 read: IOPS=1519, BW=6077KiB/s (6223kB/s)(6144KiB/1011msec) 00:10:36.121 slat (usec): min=6, max=16919, avg=332.34, stdev=1334.77 00:10:36.121 clat (usec): min=19466, max=99405, avg=44653.02, stdev=13192.31 00:10:36.121 lat (usec): min=25891, max=99417, avg=44985.36, stdev=13218.11 00:10:36.121 clat percentiles (usec): 00:10:36.121 | 1.00th=[25822], 5.00th=[28967], 10.00th=[30802], 20.00th=[37487], 00:10:36.121 | 30.00th=[38536], 40.00th=[40109], 50.00th=[41157], 60.00th=[41681], 00:10:36.121 | 70.00th=[43254], 80.00th=[55313], 90.00th=[64226], 95.00th=[71828], 00:10:36.121 | 99.00th=[87557], 99.50th=[87557], 99.90th=[96994], 99.95th=[99091], 00:10:36.121 | 99.99th=[99091] 00:10:36.121 write: IOPS=1605, BW=6421KiB/s (6575kB/s)(6492KiB/1011msec); 0 zone resets 00:10:36.121 slat (usec): min=15, max=10815, avg=292.21, stdev=1326.36 00:10:36.121 clat (usec): min=10340, max=68316, avg=35911.74, stdev=6874.92 00:10:36.121 lat (usec): min=13220, max=68348, avg=36203.95, stdev=6798.14 00:10:36.121 clat percentiles (usec): 00:10:36.121 | 1.00th=[20317], 5.00th=[26870], 10.00th=[29230], 20.00th=[31065], 00:10:36.121 | 30.00th=[32113], 40.00th=[33424], 50.00th=[35914], 60.00th=[37487], 00:10:36.121 | 70.00th=[38011], 80.00th=[40109], 90.00th=[43779], 95.00th=[45876], 00:10:36.121 | 99.00th=[64226], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:10:36.121 | 99.99th=[68682] 00:10:36.121 bw ( KiB/s): min= 4087, max= 8192, per=17.35%, avg=6139.50, stdev=2902.67, samples=2 00:10:36.121 iops : min= 1021, max= 2048, avg=1534.50, stdev=726.20, samples=2 00:10:36.121 lat (msec) : 20=0.47%, 50=86.96%, 100=12.57% 00:10:36.121 cpu : usr=2.08%, sys=5.45%, ctx=271, majf=0, minf=5 
00:10:36.121 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:10:36.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.121 issued rwts: total=1536,1623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.121 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.121 job2: (groupid=0, jobs=1): err= 0: pid=68707: Mon Jun 10 08:05:57 2024 00:10:36.121 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:36.121 slat (usec): min=10, max=5623, avg=147.36, stdev=723.28 00:10:36.121 clat (usec): min=13383, max=24716, avg=19544.76, stdev=2087.71 00:10:36.121 lat (usec): min=16680, max=24742, avg=19692.12, stdev=1975.37 00:10:36.121 clat percentiles (usec): 00:10:36.121 | 1.00th=[14746], 5.00th=[16909], 10.00th=[17171], 20.00th=[18220], 00:10:36.121 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19006], 60.00th=[19268], 00:10:36.121 | 70.00th=[19530], 80.00th=[20841], 90.00th=[23462], 95.00th=[24249], 00:10:36.121 | 99.00th=[24511], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:10:36.121 | 99.99th=[24773] 00:10:36.121 write: IOPS=3478, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1003msec); 0 zone resets 00:10:36.121 slat (usec): min=11, max=5779, avg=148.91, stdev=677.97 00:10:36.121 clat (usec): min=242, max=24021, avg=18958.65, stdev=2760.93 00:10:36.121 lat (usec): min=4186, max=24048, avg=19107.56, stdev=2687.82 00:10:36.121 clat percentiles (usec): 00:10:36.121 | 1.00th=[ 8455], 5.00th=[16319], 10.00th=[17171], 20.00th=[17433], 00:10:36.121 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18744], 60.00th=[19006], 00:10:36.121 | 70.00th=[19268], 80.00th=[21103], 90.00th=[23200], 95.00th=[23462], 00:10:36.121 | 99.00th=[23725], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:10:36.121 | 99.99th=[23987] 00:10:36.121 bw ( KiB/s): min=12312, max=14570, per=38.00%, avg=13441.00, stdev=1596.65, samples=2 00:10:36.121 iops : min= 3078, max= 3642, avg=3360.00, stdev=398.81, samples=2 00:10:36.121 lat (usec) : 250=0.02% 00:10:36.121 lat (msec) : 10=0.98%, 20=73.92%, 50=25.09% 00:10:36.121 cpu : usr=3.99%, sys=10.98%, ctx=206, majf=0, minf=2 00:10:36.121 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:36.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.121 issued rwts: total=3072,3489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.121 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.121 job3: (groupid=0, jobs=1): err= 0: pid=68708: Mon Jun 10 08:05:57 2024 00:10:36.121 read: IOPS=1501, BW=6006KiB/s (6150kB/s)(6036KiB/1005msec) 00:10:36.121 slat (usec): min=7, max=12382, avg=258.97, stdev=1137.93 00:10:36.121 clat (usec): min=1632, max=71990, avg=32142.17, stdev=12053.94 00:10:36.121 lat (usec): min=4903, max=72017, avg=32401.14, stdev=12142.45 00:10:36.121 clat percentiles (usec): 00:10:36.121 | 1.00th=[ 5276], 5.00th=[15139], 10.00th=[21103], 20.00th=[22676], 00:10:36.121 | 30.00th=[27132], 40.00th=[28443], 50.00th=[30802], 60.00th=[32375], 00:10:36.121 | 70.00th=[33817], 80.00th=[38011], 90.00th=[50594], 95.00th=[60031], 00:10:36.121 | 99.00th=[65274], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828], 00:10:36.121 | 99.99th=[71828] 00:10:36.121 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:10:36.121 slat (usec): min=18, max=12139, avg=387.76, 
stdev=1384.48 00:10:36.121 clat (msec): min=27, max=106, avg=50.34, stdev=17.77 00:10:36.121 lat (msec): min=27, max=106, avg=50.73, stdev=17.87 00:10:36.121 clat percentiles (msec): 00:10:36.121 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 35], 00:10:36.121 | 30.00th=[ 37], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 55], 00:10:36.121 | 70.00th=[ 56], 80.00th=[ 58], 90.00th=[ 72], 95.00th=[ 93], 00:10:36.121 | 99.00th=[ 105], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 107], 00:10:36.121 | 99.99th=[ 107] 00:10:36.121 bw ( KiB/s): min= 5333, max= 6957, per=17.37%, avg=6145.00, stdev=1148.34, samples=2 00:10:36.121 iops : min= 1333, max= 1739, avg=1536.00, stdev=287.09, samples=2 00:10:36.121 lat (msec) : 2=0.03%, 10=1.38%, 20=2.10%, 50=62.43%, 100=32.61% 00:10:36.121 lat (msec) : 250=1.44% 00:10:36.121 cpu : usr=1.49%, sys=6.37%, ctx=219, majf=0, minf=9 00:10:36.121 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:10:36.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.121 issued rwts: total=1509,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.121 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.121 00:10:36.121 Run status group 0 (all jobs): 00:10:36.121 READ: bw=31.5MiB/s (33.1MB/s), 6006KiB/s-12.0MiB/s (6150kB/s-12.5MB/s), io=31.9MiB (33.4MB), run=1003-1011msec 00:10:36.121 WRITE: bw=34.5MiB/s (36.2MB/s), 6113KiB/s-13.6MiB/s (6260kB/s-14.2MB/s), io=34.9MiB (36.6MB), run=1003-1011msec 00:10:36.121 00:10:36.121 Disk stats (read/write): 00:10:36.121 nvme0n1: ios=1983/2048, merge=0/0, ticks=14773/12873, in_queue=27646, util=87.16% 00:10:36.121 nvme0n2: ios=1329/1536, merge=0/0, ticks=13358/12336, in_queue=25694, util=87.94% 00:10:36.121 nvme0n3: ios=2560/2976, merge=0/0, ticks=11752/12900, in_queue=24652, util=89.09% 00:10:36.121 nvme0n4: ios=1024/1383, merge=0/0, ticks=12135/22612, in_queue=34747, util=89.64% 00:10:36.121 08:05:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:36.121 [global] 00:10:36.121 thread=1 00:10:36.121 invalidate=1 00:10:36.121 rw=randwrite 00:10:36.122 time_based=1 00:10:36.122 runtime=1 00:10:36.122 ioengine=libaio 00:10:36.122 direct=1 00:10:36.122 bs=4096 00:10:36.122 iodepth=128 00:10:36.122 norandommap=0 00:10:36.122 numjobs=1 00:10:36.122 00:10:36.122 verify_dump=1 00:10:36.122 verify_backlog=512 00:10:36.122 verify_state_save=0 00:10:36.122 do_verify=1 00:10:36.122 verify=crc32c-intel 00:10:36.122 [job0] 00:10:36.122 filename=/dev/nvme0n1 00:10:36.122 [job1] 00:10:36.122 filename=/dev/nvme0n2 00:10:36.122 [job2] 00:10:36.122 filename=/dev/nvme0n3 00:10:36.122 [job3] 00:10:36.122 filename=/dev/nvme0n4 00:10:36.122 Could not set queue depth (nvme0n1) 00:10:36.122 Could not set queue depth (nvme0n2) 00:10:36.122 Could not set queue depth (nvme0n3) 00:10:36.122 Could not set queue depth (nvme0n4) 00:10:36.122 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.122 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.122 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.122 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.122 
fio-3.35 00:10:36.122 Starting 4 threads 00:10:37.497 00:10:37.497 job0: (groupid=0, jobs=1): err= 0: pid=68768: Mon Jun 10 08:05:59 2024 00:10:37.497 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:10:37.497 slat (usec): min=7, max=7975, avg=134.31, stdev=607.41 00:10:37.497 clat (usec): min=11265, max=26509, avg=17396.30, stdev=1869.10 00:10:37.497 lat (usec): min=12743, max=26544, avg=17530.61, stdev=1884.43 00:10:37.497 clat percentiles (usec): 00:10:37.497 | 1.00th=[12911], 5.00th=[14746], 10.00th=[15139], 20.00th=[15926], 00:10:37.497 | 30.00th=[16450], 40.00th=[16909], 50.00th=[17433], 60.00th=[17695], 00:10:37.497 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19530], 95.00th=[20579], 00:10:37.497 | 99.00th=[22938], 99.50th=[23987], 99.90th=[24773], 99.95th=[24773], 00:10:37.497 | 99.99th=[26608] 00:10:37.497 write: IOPS=3904, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1002msec); 0 zone resets 00:10:37.497 slat (usec): min=10, max=10242, avg=123.77, stdev=761.94 00:10:37.497 clat (usec): min=1294, max=26988, avg=16409.31, stdev=2480.69 00:10:37.497 lat (usec): min=7755, max=27004, avg=16533.08, stdev=2573.84 00:10:37.497 clat percentiles (usec): 00:10:37.497 | 1.00th=[ 8717], 5.00th=[11994], 10.00th=[13829], 20.00th=[14877], 00:10:37.497 | 30.00th=[15401], 40.00th=[15926], 50.00th=[16319], 60.00th=[16909], 00:10:37.497 | 70.00th=[17433], 80.00th=[18220], 90.00th=[18744], 95.00th=[20055], 00:10:37.497 | 99.00th=[23987], 99.50th=[25297], 99.90th=[26870], 99.95th=[26870], 00:10:37.497 | 99.99th=[26870] 00:10:37.497 bw ( KiB/s): min=13896, max=16384, per=34.08%, avg=15140.00, stdev=1759.28, samples=2 00:10:37.497 iops : min= 3474, max= 4096, avg=3785.00, stdev=439.82, samples=2 00:10:37.497 lat (msec) : 2=0.01%, 10=0.97%, 20=93.38%, 50=5.63% 00:10:37.497 cpu : usr=3.50%, sys=12.59%, ctx=231, majf=0, minf=4 00:10:37.497 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:37.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.497 issued rwts: total=3584,3912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.497 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.497 job1: (groupid=0, jobs=1): err= 0: pid=68769: Mon Jun 10 08:05:59 2024 00:10:37.497 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:10:37.497 slat (usec): min=7, max=8120, avg=132.98, stdev=652.30 00:10:37.497 clat (usec): min=9920, max=26529, avg=16984.74, stdev=2165.19 00:10:37.497 lat (usec): min=9948, max=26876, avg=17117.72, stdev=2205.64 00:10:37.497 clat percentiles (usec): 00:10:37.497 | 1.00th=[11207], 5.00th=[13435], 10.00th=[14484], 20.00th=[15401], 00:10:37.497 | 30.00th=[16057], 40.00th=[16450], 50.00th=[16909], 60.00th=[17433], 00:10:37.497 | 70.00th=[17957], 80.00th=[18744], 90.00th=[19530], 95.00th=[20317], 00:10:37.497 | 99.00th=[22676], 99.50th=[24249], 99.90th=[24511], 99.95th=[25035], 00:10:37.497 | 99.99th=[26608] 00:10:37.497 write: IOPS=3943, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1008msec); 0 zone resets 00:10:37.497 slat (usec): min=12, max=11159, avg=123.20, stdev=652.39 00:10:37.497 clat (usec): min=6713, max=27007, avg=16705.25, stdev=2423.83 00:10:37.497 lat (usec): min=7391, max=27023, avg=16828.46, stdev=2496.03 00:10:37.497 clat percentiles (usec): 00:10:37.497 | 1.00th=[ 9634], 5.00th=[12649], 10.00th=[14222], 20.00th=[15139], 00:10:37.497 | 30.00th=[15795], 40.00th=[16319], 50.00th=[16712], 60.00th=[17171], 00:10:37.497 | 
70.00th=[17433], 80.00th=[18220], 90.00th=[19268], 95.00th=[20317], 00:10:37.497 | 99.00th=[24511], 99.50th=[25560], 99.90th=[26870], 99.95th=[27132], 00:10:37.497 | 99.99th=[27132] 00:10:37.497 bw ( KiB/s): min=14400, max=16384, per=34.65%, avg=15392.00, stdev=1402.90, samples=2 00:10:37.497 iops : min= 3600, max= 4096, avg=3848.00, stdev=350.72, samples=2 00:10:37.497 lat (msec) : 10=0.67%, 20=92.82%, 50=6.51% 00:10:37.497 cpu : usr=3.77%, sys=12.12%, ctx=361, majf=0, minf=9 00:10:37.497 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:37.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.498 issued rwts: total=3584,3975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.498 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.498 job2: (groupid=0, jobs=1): err= 0: pid=68770: Mon Jun 10 08:05:59 2024 00:10:37.498 read: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec) 00:10:37.498 slat (usec): min=15, max=27656, avg=501.03, stdev=2948.40 00:10:37.498 clat (msec): min=30, max=108, avg=63.48, stdev=18.16 00:10:37.498 lat (msec): min=36, max=108, avg=63.98, stdev=18.09 00:10:37.498 clat percentiles (msec): 00:10:37.498 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 46], 20.00th=[ 48], 00:10:37.498 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 62], 00:10:37.498 | 70.00th=[ 66], 80.00th=[ 78], 90.00th=[ 99], 95.00th=[ 102], 00:10:37.498 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 00:10:37.498 | 99.99th=[ 109] 00:10:37.498 write: IOPS=1252, BW=5010KiB/s (5130kB/s)(5040KiB/1006msec); 0 zone resets 00:10:37.498 slat (usec): min=19, max=23799, avg=376.97, stdev=2126.29 00:10:37.498 clat (usec): min=5247, max=90488, avg=46651.68, stdev=15797.03 00:10:37.498 lat (usec): min=5283, max=90537, avg=47028.65, stdev=15708.18 00:10:37.498 clat percentiles (usec): 00:10:37.498 | 1.00th=[19268], 5.00th=[24773], 10.00th=[27395], 20.00th=[34341], 00:10:37.498 | 30.00th=[38011], 40.00th=[42206], 50.00th=[42730], 60.00th=[43779], 00:10:37.498 | 70.00th=[56361], 80.00th=[59507], 90.00th=[67634], 95.00th=[73925], 00:10:37.498 | 99.00th=[90702], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:10:37.498 | 99.99th=[90702] 00:10:37.498 bw ( KiB/s): min= 4104, max= 4960, per=10.20%, avg=4532.00, stdev=605.28, samples=2 00:10:37.498 iops : min= 1026, max= 1240, avg=1133.00, stdev=151.32, samples=2 00:10:37.498 lat (msec) : 10=0.53%, 20=0.13%, 50=45.75%, 100=50.88%, 250=2.71% 00:10:37.498 cpu : usr=1.09%, sys=4.58%, ctx=73, majf=0, minf=15 00:10:37.498 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:10:37.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.498 issued rwts: total=1024,1260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.498 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.498 job3: (groupid=0, jobs=1): err= 0: pid=68771: Mon Jun 10 08:05:59 2024 00:10:37.498 read: IOPS=1538, BW=6153KiB/s (6301kB/s)(6196KiB/1007msec) 00:10:37.498 slat (usec): min=12, max=38721, avg=323.74, stdev=2370.07 00:10:37.498 clat (usec): min=6389, max=80929, avg=44696.33, stdev=7931.69 00:10:37.498 lat (usec): min=6407, max=80971, avg=45020.08, stdev=8146.46 00:10:37.498 clat percentiles (usec): 00:10:37.498 | 1.00th=[29754], 5.00th=[36439], 10.00th=[38536], 20.00th=[39584], 
00:10:37.498 | 30.00th=[40109], 40.00th=[41157], 50.00th=[42206], 60.00th=[43254], 00:10:37.498 | 70.00th=[50070], 80.00th=[54264], 90.00th=[55837], 95.00th=[56361], 00:10:37.498 | 99.00th=[57410], 99.50th=[59507], 99.90th=[78119], 99.95th=[81265], 00:10:37.498 | 99.99th=[81265] 00:10:37.498 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:10:37.498 slat (usec): min=6, max=22088, avg=234.81, stdev=1586.89 00:10:37.498 clat (usec): min=6675, max=55263, avg=28483.54, stdev=4450.31 00:10:37.498 lat (usec): min=6745, max=55324, avg=28718.35, stdev=4230.19 00:10:37.498 clat percentiles (usec): 00:10:37.498 | 1.00th=[17957], 5.00th=[23200], 10.00th=[23987], 20.00th=[25035], 00:10:37.498 | 30.00th=[25822], 40.00th=[26608], 50.00th=[28181], 60.00th=[28967], 00:10:37.498 | 70.00th=[29492], 80.00th=[32113], 90.00th=[34866], 95.00th=[38536], 00:10:37.498 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[40109], 00:10:37.498 | 99.99th=[55313] 00:10:37.498 bw ( KiB/s): min= 7280, max= 8192, per=17.41%, avg=7736.00, stdev=644.88, samples=2 00:10:37.498 iops : min= 1820, max= 2048, avg=1934.00, stdev=161.22, samples=2 00:10:37.498 lat (msec) : 10=0.42%, 20=0.70%, 50=87.07%, 100=11.82% 00:10:37.498 cpu : usr=1.89%, sys=6.56%, ctx=75, majf=0, minf=7 00:10:37.498 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:10:37.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.498 issued rwts: total=1549,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.498 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.498 00:10:37.498 Run status group 0 (all jobs): 00:10:37.498 READ: bw=37.7MiB/s (39.6MB/s), 4072KiB/s-14.0MiB/s (4169kB/s-14.7MB/s), io=38.1MiB (39.9MB), run=1002-1008msec 00:10:37.498 WRITE: bw=43.4MiB/s (45.5MB/s), 5010KiB/s-15.4MiB/s (5130kB/s-16.2MB/s), io=43.7MiB (45.9MB), run=1002-1008msec 00:10:37.498 00:10:37.498 Disk stats (read/write): 00:10:37.498 nvme0n1: ios=3122/3406, merge=0/0, ticks=26051/23628, in_queue=49679, util=88.38% 00:10:37.498 nvme0n2: ios=3121/3399, merge=0/0, ticks=25530/24566, in_queue=50096, util=88.37% 00:10:37.498 nvme0n3: ios=960/1024, merge=0/0, ticks=15900/10689, in_queue=26589, util=89.04% 00:10:37.498 nvme0n4: ios=1402/1536, merge=0/0, ticks=61868/40532, in_queue=102400, util=89.59% 00:10:37.498 08:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:37.498 08:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68788 00:10:37.498 08:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:37.498 08:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:37.498 [global] 00:10:37.498 thread=1 00:10:37.498 invalidate=1 00:10:37.498 rw=read 00:10:37.498 time_based=1 00:10:37.498 runtime=10 00:10:37.498 ioengine=libaio 00:10:37.498 direct=1 00:10:37.498 bs=4096 00:10:37.498 iodepth=1 00:10:37.498 norandommap=1 00:10:37.498 numjobs=1 00:10:37.498 00:10:37.498 [job0] 00:10:37.498 filename=/dev/nvme0n1 00:10:37.498 [job1] 00:10:37.498 filename=/dev/nvme0n2 00:10:37.498 [job2] 00:10:37.498 filename=/dev/nvme0n3 00:10:37.498 [job3] 00:10:37.498 filename=/dev/nvme0n4 00:10:37.498 Could not set queue depth (nvme0n1) 00:10:37.498 Could not set queue depth (nvme0n2) 00:10:37.498 Could not set queue depth (nvme0n3) 00:10:37.498 Could not set queue depth (nvme0n4) 
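The fio.sh@58 through fio.sh@61 lines traced above launch the verification workload in the background so the backing bdevs can be removed while I/O is still in flight. A minimal bash sketch of that pattern, using the wrapper path and options exactly as traced (the variable name is only illustrative):

    # Start a 10-second read workload against the exported namespaces and keep its PID.
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    # Give fio a moment to open /dev/nvme0n1..n4 before anything is hot-removed.
    sleep 3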
00:10:37.756 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.756 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.756 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.756 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.756 fio-3.35 00:10:37.756 Starting 4 threads 00:10:41.036 08:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:41.036 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=39772160, buflen=4096 00:10:41.036 fio: pid=68832, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:41.036 08:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:41.036 fio: pid=68831, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:41.036 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=39206912, buflen=4096 00:10:41.036 08:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.036 08:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:41.036 fio: pid=68829, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:41.036 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=41996288, buflen=4096 00:10:41.294 08:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.294 08:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:41.553 fio: pid=68830, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:41.553 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=19730432, buflen=4096 00:10:41.553 00:10:41.553 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68829: Mon Jun 10 08:06:03 2024 00:10:41.553 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(40.1MiB/3358msec) 00:10:41.553 slat (usec): min=8, max=12299, avg=21.45, stdev=213.80 00:10:41.553 clat (usec): min=120, max=2789, avg=304.42, stdev=75.26 00:10:41.553 lat (usec): min=132, max=12565, avg=325.87, stdev=226.40 00:10:41.553 clat percentiles (usec): 00:10:41.553 | 1.00th=[ 145], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 249], 00:10:41.553 | 30.00th=[ 285], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:10:41.553 | 70.00th=[ 338], 80.00th=[ 343], 90.00th=[ 355], 95.00th=[ 371], 00:10:41.553 | 99.00th=[ 424], 99.50th=[ 465], 99.90th=[ 783], 99.95th=[ 1450], 00:10:41.553 | 99.99th=[ 2671] 00:10:41.553 bw ( KiB/s): min=11304, max=12208, per=21.12%, avg=11660.00, stdev=406.34, samples=6 00:10:41.553 iops : min= 2826, max= 3052, avg=2915.00, stdev=101.59, samples=6 00:10:41.553 lat (usec) : 250=20.44%, 500=79.18%, 750=0.25%, 1000=0.05% 00:10:41.553 lat (msec) : 2=0.03%, 4=0.04% 00:10:41.553 cpu : usr=1.01%, sys=4.83%, ctx=10269, majf=0, minf=1 00:10:41.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.553 complete : 0=0.1%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.553 issued rwts: total=10254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.553 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68830: Mon Jun 10 08:06:03 2024 00:10:41.553 read: IOPS=5767, BW=22.5MiB/s (23.6MB/s)(82.8MiB/3676msec) 00:10:41.553 slat (usec): min=8, max=14198, avg=16.17, stdev=186.45 00:10:41.553 clat (usec): min=115, max=1965, avg=155.80, stdev=34.56 00:10:41.553 lat (usec): min=127, max=14942, avg=171.97, stdev=192.71 00:10:41.553 clat percentiles (usec): 00:10:41.553 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:10:41.553 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:10:41.553 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 196], 95.00th=[ 210], 00:10:41.553 | 99.00th=[ 235], 99.50th=[ 249], 99.90th=[ 363], 99.95th=[ 562], 00:10:41.553 | 99.99th=[ 1467] 00:10:41.553 bw ( KiB/s): min=20072, max=24936, per=41.98%, avg=23174.86, stdev=2183.51, samples=7 00:10:41.553 iops : min= 5018, max= 6234, avg=5793.71, stdev=545.88, samples=7 00:10:41.553 lat (usec) : 250=99.54%, 500=0.40%, 750=0.02%, 1000=0.01% 00:10:41.553 lat (msec) : 2=0.02% 00:10:41.553 cpu : usr=1.22%, sys=7.16%, ctx=21214, majf=0, minf=1 00:10:41.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.553 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.553 issued rwts: total=21202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.553 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68831: Mon Jun 10 08:06:03 2024 00:10:41.553 read: IOPS=3057, BW=11.9MiB/s (12.5MB/s)(37.4MiB/3131msec) 00:10:41.553 slat (usec): min=10, max=7839, avg=23.02, stdev=108.91 00:10:41.553 clat (usec): min=134, max=3312, avg=301.82, stdev=83.63 00:10:41.553 lat (usec): min=166, max=8025, avg=324.84, stdev=137.14 00:10:41.553 clat percentiles (usec): 00:10:41.553 | 1.00th=[ 159], 5.00th=[ 178], 10.00th=[ 204], 20.00th=[ 260], 00:10:41.553 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 322], 00:10:41.553 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 347], 95.00th=[ 359], 00:10:41.553 | 99.00th=[ 396], 99.50th=[ 474], 99.90th=[ 914], 99.95th=[ 2114], 00:10:41.553 | 99.99th=[ 3326] 00:10:41.553 bw ( KiB/s): min=11192, max=13328, per=21.65%, avg=11952.00, stdev=906.65, samples=6 00:10:41.553 iops : min= 2798, max= 3332, avg=2988.00, stdev=226.66, samples=6 00:10:41.553 lat (usec) : 250=16.65%, 500=82.93%, 750=0.25%, 1000=0.06% 00:10:41.553 lat (msec) : 2=0.04%, 4=0.05% 00:10:41.553 cpu : usr=1.53%, sys=5.65%, ctx=9587, majf=0, minf=1 00:10:41.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.553 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.553 issued rwts: total=9573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.554 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68832: Mon Jun 10 08:06:03 2024 00:10:41.554 read: IOPS=3349, BW=13.1MiB/s (13.7MB/s)(37.9MiB/2899msec) 
00:10:41.554 slat (usec): min=9, max=100, avg=14.38, stdev= 4.20 00:10:41.554 clat (usec): min=144, max=7687, avg=282.57, stdev=149.74 00:10:41.554 lat (usec): min=157, max=7700, avg=296.95, stdev=149.79 00:10:41.554 clat percentiles (usec): 00:10:41.554 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 182], 00:10:41.554 | 30.00th=[ 206], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 330], 00:10:41.554 | 70.00th=[ 338], 80.00th=[ 343], 90.00th=[ 355], 95.00th=[ 363], 00:10:41.554 | 99.00th=[ 392], 99.50th=[ 441], 99.90th=[ 914], 99.95th=[ 3130], 00:10:41.554 | 99.99th=[ 7701] 00:10:41.554 bw ( KiB/s): min=11304, max=18792, per=24.91%, avg=13750.40, stdev=3444.73, samples=5 00:10:41.554 iops : min= 2826, max= 4698, avg=3437.60, stdev=861.18, samples=5 00:10:41.554 lat (usec) : 250=37.86%, 500=61.82%, 750=0.19%, 1000=0.04% 00:10:41.554 lat (msec) : 2=0.02%, 4=0.02%, 10=0.04% 00:10:41.554 cpu : usr=1.10%, sys=4.18%, ctx=9711, majf=0, minf=1 00:10:41.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.554 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.554 issued rwts: total=9711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.554 00:10:41.554 Run status group 0 (all jobs): 00:10:41.554 READ: bw=53.9MiB/s (56.5MB/s), 11.9MiB/s-22.5MiB/s (12.5MB/s-23.6MB/s), io=198MiB (208MB), run=2899-3676msec 00:10:41.554 00:10:41.554 Disk stats (read/write): 00:10:41.554 nvme0n1: ios=9207/0, merge=0/0, ticks=2929/0, in_queue=2929, util=95.34% 00:10:41.554 nvme0n2: ios=20810/0, merge=0/0, ticks=3310/0, in_queue=3310, util=95.15% 00:10:41.554 nvme0n3: ios=9492/0, merge=0/0, ticks=2948/0, in_queue=2948, util=96.36% 00:10:41.554 nvme0n4: ios=9620/0, merge=0/0, ticks=2645/0, in_queue=2645, util=96.48% 00:10:41.554 08:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.554 08:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:41.812 08:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.812 08:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:42.069 08:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:42.069 08:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:42.327 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:42.327 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:42.585 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:42.585 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:42.857 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:43.125 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68788 
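Between the Remote I/O errors above and the wait on PID 68788, the script deletes every backing bdev over RPC while fio is still reading and then expects fio to exit non-zero. A condensed sketch of that hotplug sequence, reusing the RPC method names from the trace; fio_pid is the hypothetical variable from the earlier sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Pull the block devices out from under the running fio job.
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete "$m"
    done
    # fio should now fail once its targets disappear, mirroring fio_status=4 below.
    if wait "$fio_pid"; then
        echo "unexpected: fio survived bdev removal"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi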
00:10:43.125 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:43.125 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.125 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.125 08:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:10:43.125 08:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:43.125 08:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.125 08:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:43.125 08:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.125 nvmf hotplug test: fio failed as expected 00:10:43.125 08:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:10:43.126 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:43.126 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:43.126 08:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.385 rmmod nvme_tcp 00:10:43.385 rmmod nvme_fabrics 00:10:43.385 rmmod nvme_keyring 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68400 ']' 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68400 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 68400 ']' 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 68400 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 68400 
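The teardown traced above disconnects the initiator, waits for the SPDK namespace to disappear, removes the subsystem, and finally kills the target application (PID 68400 in this run). A simplified sketch of the same flow; the polling loop stands in for the suite's waitforserial_disconnect helper, and nvmfpid is assumed to hold the target PID recorded when it was started:

    # Detach the initiator and wait until the SPDK serial is gone from lsblk.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done
    # Remove the subsystem, then stop the nvmf target process.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"
    wait "$nvmfpid"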
00:10:43.385 killing process with pid 68400 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 68400' 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 68400 00:10:43.385 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 68400 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:43.645 00:10:43.645 real 0m19.541s 00:10:43.645 user 1m13.882s 00:10:43.645 sys 0m9.944s 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:43.645 08:06:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.645 ************************************ 00:10:43.645 END TEST nvmf_fio_target 00:10:43.645 ************************************ 00:10:43.645 08:06:05 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:43.645 08:06:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:43.645 08:06:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:43.645 08:06:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:43.645 ************************************ 00:10:43.645 START TEST nvmf_bdevio 00:10:43.645 ************************************ 00:10:43.645 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:43.905 * Looking for test storage... 
00:10:43.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.905 08:06:05 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:43.905 Cannot find device "nvmf_tgt_br" 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.905 Cannot find device "nvmf_tgt_br2" 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:43.905 Cannot find device "nvmf_tgt_br" 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:43.905 Cannot find device "nvmf_tgt_br2" 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:43.905 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.906 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:43.906 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:43.906 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:43.906 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:43.906 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:43.906 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:43.906 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.163 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:44.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:10:44.163 00:10:44.163 --- 10.0.0.2 ping statistics --- 00:10:44.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.163 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:44.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:44.164 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:10:44.164 00:10:44.164 --- 10.0.0.3 ping statistics --- 00:10:44.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.164 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:44.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:44.164 00:10:44.164 --- 10.0.0.1 ping statistics --- 00:10:44.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.164 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69097 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69097 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 69097 ']' 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:44.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:44.164 08:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.164 [2024-06-10 08:06:05.984483] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:10:44.164 [2024-06-10 08:06:05.984623] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.423 [2024-06-10 08:06:06.129029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.423 [2024-06-10 08:06:06.256264] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.423 [2024-06-10 08:06:06.256369] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
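[Editor's sketch] nvmfappstart then prefixes NVMF_APP with the namespace exec command and blocks in waitforlisten until the target's RPC socket answers. A minimal stand-in for that step, assuming the binary path and core mask shown in the trace; the polling loop only illustrates what waitforlisten accomplishes and is not the helper's actual implementation:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # Poll the default RPC socket until the application is ready to serve requests.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt is up with pid $nvmfpid"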
00:10:44.423 [2024-06-10 08:06:06.256384] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.423 [2024-06-10 08:06:06.256394] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.423 [2024-06-10 08:06:06.256403] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.423 [2024-06-10 08:06:06.256612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:10:44.423 [2024-06-10 08:06:06.256749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:10:44.423 [2024-06-10 08:06:06.257444] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:10:44.423 [2024-06-10 08:06:06.257460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.681 [2024-06-10 08:06:06.321259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:45.249 08:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:45.249 08:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:10:45.249 08:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:45.249 08:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:45.249 08:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.249 [2024-06-10 08:06:07.022270] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.249 Malloc0 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.249 [2024-06-10 08:06:07.096434] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:45.249 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:45.249 { 00:10:45.249 "params": { 00:10:45.249 "name": "Nvme$subsystem", 00:10:45.249 "trtype": "$TEST_TRANSPORT", 00:10:45.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:45.249 "adrfam": "ipv4", 00:10:45.249 "trsvcid": "$NVMF_PORT", 00:10:45.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:45.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:45.249 "hdgst": ${hdgst:-false}, 00:10:45.249 "ddgst": ${ddgst:-false} 00:10:45.249 }, 00:10:45.250 "method": "bdev_nvme_attach_controller" 00:10:45.250 } 00:10:45.250 EOF 00:10:45.250 )") 00:10:45.250 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:45.250 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:45.250 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:45.250 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:45.250 "params": { 00:10:45.250 "name": "Nvme1", 00:10:45.250 "trtype": "tcp", 00:10:45.250 "traddr": "10.0.0.2", 00:10:45.250 "adrfam": "ipv4", 00:10:45.250 "trsvcid": "4420", 00:10:45.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:45.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:45.250 "hdgst": false, 00:10:45.250 "ddgst": false 00:10:45.250 }, 00:10:45.250 "method": "bdev_nvme_attach_controller" 00:10:45.250 }' 00:10:45.509 [2024-06-10 08:06:07.159437] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
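[Editor's sketch] Before bdevio starts, the target side is provisioned entirely over JSON-RPC (transport, a 64 MiB malloc bdev, subsystem, namespace, listener on 10.0.0.2:4420), and bdevio then receives its initiator configuration through --json /dev/fd/62, fed by gen_nvmf_target_json, which wraps the bdev_nvme_attach_controller parameters printed above. A hedged equivalent using rpc.py directly and a standalone config file; the "subsystems"/"config" wrapper is the standard SPDK JSON-config layout and is assumed here, since the trace only shows the printf'd params fragment:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target-side provisioning, mirroring the rpc_cmd calls in the trace.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator-side bdev config consumed by bdevio.
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json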
00:10:45.509 [2024-06-10 08:06:07.159581] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69139 ] 00:10:45.509 [2024-06-10 08:06:07.302047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:45.767 [2024-06-10 08:06:07.433880] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.767 [2024-06-10 08:06:07.434009] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.767 [2024-06-10 08:06:07.434276] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.767 [2024-06-10 08:06:07.514019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:46.027 I/O targets: 00:10:46.027 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:46.027 00:10:46.027 00:10:46.027 CUnit - A unit testing framework for C - Version 2.1-3 00:10:46.027 http://cunit.sourceforge.net/ 00:10:46.027 00:10:46.027 00:10:46.027 Suite: bdevio tests on: Nvme1n1 00:10:46.027 Test: blockdev write read block ...passed 00:10:46.027 Test: blockdev write zeroes read block ...passed 00:10:46.027 Test: blockdev write zeroes read no split ...passed 00:10:46.027 Test: blockdev write zeroes read split ...passed 00:10:46.027 Test: blockdev write zeroes read split partial ...passed 00:10:46.027 Test: blockdev reset ...[2024-06-10 08:06:07.667250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:46.027 [2024-06-10 08:06:07.667383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa613d0 (9): Bad file descriptor 00:10:46.027 [2024-06-10 08:06:07.682818] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:46.027 passed 00:10:46.027 Test: blockdev write read 8 blocks ...passed 00:10:46.027 Test: blockdev write read size > 128k ...passed 00:10:46.027 Test: blockdev write read invalid size ...passed 00:10:46.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:46.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:46.027 Test: blockdev write read max offset ...passed 00:10:46.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:46.027 Test: blockdev writev readv 8 blocks ...passed 00:10:46.027 Test: blockdev writev readv 30 x 1block ...passed 00:10:46.027 Test: blockdev writev readv block ...passed 00:10:46.027 Test: blockdev writev readv size > 128k ...passed 00:10:46.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:46.027 Test: blockdev comparev and writev ...[2024-06-10 08:06:07.690184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.027 [2024-06-10 08:06:07.690224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:46.027 [2024-06-10 08:06:07.690244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.027 [2024-06-10 08:06:07.690256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:46.027 [2024-06-10 08:06:07.690712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.027 [2024-06-10 08:06:07.690741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:46.027 [2024-06-10 08:06:07.690759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.027 [2024-06-10 08:06:07.690769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:46.027 [2024-06-10 08:06:07.691172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.027 [2024-06-10 08:06:07.691202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:46.027 [2024-06-10 08:06:07.691220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.027 [2024-06-10 08:06:07.691231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:46.027 [2024-06-10 08:06:07.691618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.027 [2024-06-10 08:06:07.691646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:46.028 [2024-06-10 08:06:07.691664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.028 [2024-06-10 08:06:07.691674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:46.028 passed 00:10:46.028 Test: blockdev nvme passthru rw ...passed 00:10:46.028 Test: blockdev nvme passthru vendor specific ...[2024-06-10 08:06:07.692527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:46.028 [2024-06-10 08:06:07.692551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:46.028 [2024-06-10 08:06:07.692656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:46.028 [2024-06-10 08:06:07.692680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:46.028 [2024-06-10 08:06:07.692796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:46.028 [2024-06-10 08:06:07.692823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:46.028 [2024-06-10 08:06:07.692923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:46.028 [2024-06-10 08:06:07.692944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:46.028 passed 00:10:46.028 Test: blockdev nvme admin passthru ...passed 00:10:46.028 Test: blockdev copy ...passed 00:10:46.028 00:10:46.028 Run Summary: Type Total Ran Passed Failed Inactive 00:10:46.028 suites 1 1 n/a 0 0 00:10:46.028 tests 23 23 23 0 0 00:10:46.028 asserts 152 152 152 0 n/a 00:10:46.028 00:10:46.028 Elapsed time = 0.153 seconds 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.287 08:06:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.287 rmmod nvme_tcp 00:10:46.287 rmmod nvme_fabrics 00:10:46.287 rmmod nvme_keyring 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69097 ']' 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69097 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
69097 ']' 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 69097 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 69097 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:10:46.287 killing process with pid 69097 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 69097' 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 69097 00:10:46.287 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 69097 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:46.546 ************************************ 00:10:46.546 END TEST nvmf_bdevio 00:10:46.546 ************************************ 00:10:46.546 00:10:46.546 real 0m2.937s 00:10:46.546 user 0m9.791s 00:10:46.546 sys 0m0.821s 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:46.546 08:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.805 08:06:08 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:46.805 08:06:08 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:46.805 08:06:08 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:46.805 08:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.805 ************************************ 00:10:46.805 START TEST nvmf_auth_target 00:10:46.805 ************************************ 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:46.805 * Looking for test storage... 
00:10:46.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.805 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:46.806 Cannot find device "nvmf_tgt_br" 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.806 Cannot find device "nvmf_tgt_br2" 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:46.806 Cannot find device "nvmf_tgt_br" 00:10:46.806 
08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:46.806 Cannot find device "nvmf_tgt_br2" 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:46.806 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:47.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:47.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:47.065 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:47.066 08:06:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:47.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:10:47.066 00:10:47.066 --- 10.0.0.2 ping statistics --- 00:10:47.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.066 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:47.066 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:47.066 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:47.066 00:10:47.066 --- 10.0.0.3 ping statistics --- 00:10:47.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.066 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:47.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:47.066 00:10:47.066 --- 10.0.0.1 ping statistics --- 00:10:47.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.066 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:47.066 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.324 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69307 00:10:47.324 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69307 00:10:47.324 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 69307 ']' 00:10:47.324 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.324 08:06:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:47.324 08:06:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:47.324 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.325 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:47.325 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.262 08:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:48.262 08:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:10:48.262 08:06:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:48.262 08:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:48.262 08:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69344 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e24dd300afc545494b577859017cc0d9cb73bcf9f0c8e247 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Evz 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e24dd300afc545494b577859017cc0d9cb73bcf9f0c8e247 0 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e24dd300afc545494b577859017cc0d9cb73bcf9f0c8e247 0 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e24dd300afc545494b577859017cc0d9cb73bcf9f0c8e247 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Evz 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Evz 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Evz 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=774cb64c33a81860edf808848ace5341a3454ce9aa93bfdac3a7cb69e2135602 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yUn 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 774cb64c33a81860edf808848ace5341a3454ce9aa93bfdac3a7cb69e2135602 3 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 774cb64c33a81860edf808848ace5341a3454ce9aa93bfdac3a7cb69e2135602 3 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=774cb64c33a81860edf808848ace5341a3454ce9aa93bfdac3a7cb69e2135602 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:48.262 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yUn 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yUn 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.yUn 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=15355bd283c88217cd972789a9d059a4 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.nMi 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 15355bd283c88217cd972789a9d059a4 1 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 15355bd283c88217cd972789a9d059a4 1 
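[Editor's sketch] Each gen_dhchap_key call above draws the requested number of random bytes with xxd, wraps them in the DHHC-1 secret format via the inline python helper, and stores the result in a mode-0600 temp file recorded in the keys/ckeys arrays. A trimmed sketch of producing one such key, reusing the helper names from the trace; format_dhchap_key is assumed here to print the wrapped secret to stdout, and its exact encoding is deliberately not re-implemented:

    # 24 random bytes -> 48 hex characters, as for key0 above.
    key=$(xxd -p -c0 -l 24 /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)
    # Digest id 0 corresponds to 'null' in the digests table from the trace.
    format_dhchap_key "$key" 0 > "$file"   # helper sourced from nvmf/common.sh (assumption)
    chmod 0600 "$file"
    keys[0]=$file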
00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=15355bd283c88217cd972789a9d059a4 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.nMi 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.nMi 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.nMi 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=94369b7ca8e10e30d6d9d4281a7dfdb6e8a63f566a70f93d 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IW0 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 94369b7ca8e10e30d6d9d4281a7dfdb6e8a63f566a70f93d 2 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 94369b7ca8e10e30d6d9d4281a7dfdb6e8a63f566a70f93d 2 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=94369b7ca8e10e30d6d9d4281a7dfdb6e8a63f566a70f93d 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IW0 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IW0 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.IW0 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:48.522 
08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7e04e1dbf9132c890da31f123876307c35880735318abc47 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HV3 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7e04e1dbf9132c890da31f123876307c35880735318abc47 2 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7e04e1dbf9132c890da31f123876307c35880735318abc47 2 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7e04e1dbf9132c890da31f123876307c35880735318abc47 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HV3 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HV3 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.HV3 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=53ec2c67cf233ed631b792e40cdb6566 00:10:48.522 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Gqp 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 53ec2c67cf233ed631b792e40cdb6566 1 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 53ec2c67cf233ed631b792e40cdb6566 1 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=53ec2c67cf233ed631b792e40cdb6566 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Gqp 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Gqp 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Gqp 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1f106154053206ef8f0398169818e545e353cb71c2f8c8e314a3c389f1136fbe 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FTM 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1f106154053206ef8f0398169818e545e353cb71c2f8c8e314a3c389f1136fbe 3 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1f106154053206ef8f0398169818e545e353cb71c2f8c8e314a3c389f1136fbe 3 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1f106154053206ef8f0398169818e545e353cb71c2f8c8e314a3c389f1136fbe 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FTM 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FTM 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.FTM 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69307 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 69307 ']' 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:48.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
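[Editor's sketch] The auth test runs two SPDK applications: the nvmf target inside the namespace (nvmfpid 69307, default RPC socket) and a host-side spdk_tgt started earlier with '-m 2 -r /var/tmp/host.sock -L nvme_auth' (hostpid 69344). The two waitforlisten calls around here block until both RPC sockets answer before the generated key files are registered on each side. An illustrative version of that wait, assuming rpc.py is used to probe the sockets rather than the helper's real implementation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target application on the default socket, host application on /var/tmp/host.sock.
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    until $rpc -s /var/tmp/host.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done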
00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:48.781 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.040 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:49.040 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:10:49.040 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69344 /var/tmp/host.sock 00:10:49.040 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 69344 ']' 00:10:49.040 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:10:49.040 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:49.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:49.040 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:49.040 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:49.040 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Evz 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Evz 00:10:49.298 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Evz 00:10:49.557 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.yUn ]] 00:10:49.557 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yUn 00:10:49.557 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.557 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.557 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.557 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yUn 00:10:49.557 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.yUn 00:10:49.816 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:49.816 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nMi 00:10:49.816 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.816 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.816 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.816 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.nMi 00:10:49.816 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.nMi 00:10:50.075 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.IW0 ]] 00:10:50.075 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IW0 00:10:50.075 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.075 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.075 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.075 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IW0 00:10:50.075 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IW0 00:10:50.334 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:50.334 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HV3 00:10:50.334 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.334 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.334 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.334 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.HV3 00:10:50.334 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.HV3 00:10:50.593 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Gqp ]] 00:10:50.593 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gqp 00:10:50.593 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.593 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.593 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.593 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gqp 00:10:50.593 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gqp 00:10:50.851 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:50.851 
08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.FTM 00:10:50.851 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.851 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.851 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.851 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.FTM 00:10:50.851 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.FTM 00:10:51.110 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:51.110 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:51.110 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:51.110 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.110 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:51.110 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.369 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.628 00:10:51.628 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.628 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:10:51.628 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.887 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.887 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.887 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.887 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.887 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.887 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.887 { 00:10:51.887 "cntlid": 1, 00:10:51.887 "qid": 0, 00:10:51.887 "state": "enabled", 00:10:51.887 "listen_address": { 00:10:51.887 "trtype": "TCP", 00:10:51.887 "adrfam": "IPv4", 00:10:51.887 "traddr": "10.0.0.2", 00:10:51.887 "trsvcid": "4420" 00:10:51.887 }, 00:10:51.887 "peer_address": { 00:10:51.887 "trtype": "TCP", 00:10:51.887 "adrfam": "IPv4", 00:10:51.887 "traddr": "10.0.0.1", 00:10:51.887 "trsvcid": "42170" 00:10:51.887 }, 00:10:51.887 "auth": { 00:10:51.887 "state": "completed", 00:10:51.887 "digest": "sha256", 00:10:51.887 "dhgroup": "null" 00:10:51.887 } 00:10:51.887 } 00:10:51.887 ]' 00:10:51.887 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.887 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.887 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:52.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.404 08:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.593 08:06:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.593 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.160 00:10:57.160 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:57.160 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.160 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.418 { 00:10:57.418 "cntlid": 3, 00:10:57.418 "qid": 0, 00:10:57.418 "state": "enabled", 00:10:57.418 "listen_address": { 00:10:57.418 "trtype": "TCP", 00:10:57.418 "adrfam": "IPv4", 00:10:57.418 "traddr": "10.0.0.2", 00:10:57.418 "trsvcid": "4420" 00:10:57.418 }, 00:10:57.418 "peer_address": { 00:10:57.418 "trtype": "TCP", 00:10:57.418 "adrfam": "IPv4", 00:10:57.418 "traddr": "10.0.0.1", 00:10:57.418 "trsvcid": "42200" 
00:10:57.418 }, 00:10:57.418 "auth": { 00:10:57.418 "state": "completed", 00:10:57.418 "digest": "sha256", 00:10:57.418 "dhgroup": "null" 00:10:57.418 } 00:10:57.418 } 00:10:57.418 ]' 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.418 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.677 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:10:58.612 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.612 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:10:58.612 08:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.612 08:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.612 08:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.612 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.612 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:58.612 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:58.870 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:58.870 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.871 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:58.871 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:58.871 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:58.871 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.871 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.871 08:06:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.871 08:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.871 08:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.871 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.871 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.130 00:10:59.130 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.130 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.130 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:59.388 { 00:10:59.388 "cntlid": 5, 00:10:59.388 "qid": 0, 00:10:59.388 "state": "enabled", 00:10:59.388 "listen_address": { 00:10:59.388 "trtype": "TCP", 00:10:59.388 "adrfam": "IPv4", 00:10:59.388 "traddr": "10.0.0.2", 00:10:59.388 "trsvcid": "4420" 00:10:59.388 }, 00:10:59.388 "peer_address": { 00:10:59.388 "trtype": "TCP", 00:10:59.388 "adrfam": "IPv4", 00:10:59.388 "traddr": "10.0.0.1", 00:10:59.388 "trsvcid": "53036" 00:10:59.388 }, 00:10:59.388 "auth": { 00:10:59.388 "state": "completed", 00:10:59.388 "digest": "sha256", 00:10:59.388 "dhgroup": "null" 00:10:59.388 } 00:10:59.388 } 00:10:59.388 ]' 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:59.388 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.647 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.647 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.647 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.647 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:00.583 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:01.173 00:11:01.173 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.173 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:01.173 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.432 { 00:11:01.432 "cntlid": 7, 00:11:01.432 "qid": 0, 00:11:01.432 "state": "enabled", 00:11:01.432 "listen_address": { 00:11:01.432 "trtype": "TCP", 00:11:01.432 "adrfam": "IPv4", 00:11:01.432 "traddr": "10.0.0.2", 00:11:01.432 "trsvcid": "4420" 00:11:01.432 }, 00:11:01.432 "peer_address": { 00:11:01.432 "trtype": "TCP", 00:11:01.432 "adrfam": "IPv4", 00:11:01.432 "traddr": "10.0.0.1", 00:11:01.432 "trsvcid": "53060" 00:11:01.432 }, 00:11:01.432 "auth": { 00:11:01.432 "state": "completed", 00:11:01.432 "digest": "sha256", 00:11:01.432 "dhgroup": "null" 00:11:01.432 } 00:11:01.432 } 00:11:01.432 ]' 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.432 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.691 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:11:02.627 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.627 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:02.627 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.627 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.627 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.627 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:02.627 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.627 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe2048 00:11:02.627 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.885 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.143 00:11:03.143 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.143 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.143 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.402 { 00:11:03.402 "cntlid": 9, 00:11:03.402 "qid": 0, 00:11:03.402 "state": "enabled", 00:11:03.402 "listen_address": { 00:11:03.402 "trtype": "TCP", 00:11:03.402 "adrfam": "IPv4", 00:11:03.402 "traddr": "10.0.0.2", 00:11:03.402 "trsvcid": "4420" 00:11:03.402 }, 00:11:03.402 "peer_address": { 00:11:03.402 "trtype": "TCP", 00:11:03.402 "adrfam": "IPv4", 00:11:03.402 "traddr": "10.0.0.1", 00:11:03.402 "trsvcid": "53082" 00:11:03.402 }, 00:11:03.402 "auth": { 00:11:03.402 "state": "completed", 
00:11:03.402 "digest": "sha256", 00:11:03.402 "dhgroup": "ffdhe2048" 00:11:03.402 } 00:11:03.402 } 00:11:03.402 ]' 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:03.402 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.660 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.661 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.661 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.919 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:11:04.487 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.487 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:04.487 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:04.487 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.487 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:04.487 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.487 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:04.487 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.746 08:06:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.746 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.313 00:11:05.313 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.313 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.313 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.313 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.313 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.313 08:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:05.313 08:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.313 08:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:05.313 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.313 { 00:11:05.313 "cntlid": 11, 00:11:05.313 "qid": 0, 00:11:05.313 "state": "enabled", 00:11:05.313 "listen_address": { 00:11:05.313 "trtype": "TCP", 00:11:05.313 "adrfam": "IPv4", 00:11:05.313 "traddr": "10.0.0.2", 00:11:05.313 "trsvcid": "4420" 00:11:05.313 }, 00:11:05.313 "peer_address": { 00:11:05.313 "trtype": "TCP", 00:11:05.313 "adrfam": "IPv4", 00:11:05.313 "traddr": "10.0.0.1", 00:11:05.313 "trsvcid": "53100" 00:11:05.313 }, 00:11:05.313 "auth": { 00:11:05.313 "state": "completed", 00:11:05.313 "digest": "sha256", 00:11:05.313 "dhgroup": "ffdhe2048" 00:11:05.313 } 00:11:05.313 } 00:11:05.313 ]' 00:11:05.313 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.571 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.571 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.571 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:05.571 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.571 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.571 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.571 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.830 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.766 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:07.333 00:11:07.333 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.333 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.333 08:06:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.591 { 00:11:07.591 "cntlid": 13, 00:11:07.591 "qid": 0, 00:11:07.591 "state": "enabled", 00:11:07.591 "listen_address": { 00:11:07.591 "trtype": "TCP", 00:11:07.591 "adrfam": "IPv4", 00:11:07.591 "traddr": "10.0.0.2", 00:11:07.591 "trsvcid": "4420" 00:11:07.591 }, 00:11:07.591 "peer_address": { 00:11:07.591 "trtype": "TCP", 00:11:07.591 "adrfam": "IPv4", 00:11:07.591 "traddr": "10.0.0.1", 00:11:07.591 "trsvcid": "53126" 00:11:07.591 }, 00:11:07.591 "auth": { 00:11:07.591 "state": "completed", 00:11:07.591 "digest": "sha256", 00:11:07.591 "dhgroup": "ffdhe2048" 00:11:07.591 } 00:11:07.591 } 00:11:07.591 ]' 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.591 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.155 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:11:08.721 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.721 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:08.721 08:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:08.721 08:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.721 08:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:08.721 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.721 08:06:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:08.721 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:08.980 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:09.239 00:11:09.239 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.239 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.239 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.497 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.497 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.497 08:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:09.497 08:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.497 08:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:09.497 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.497 { 00:11:09.497 "cntlid": 15, 00:11:09.497 "qid": 0, 00:11:09.497 "state": "enabled", 00:11:09.497 "listen_address": { 00:11:09.497 "trtype": "TCP", 00:11:09.497 "adrfam": "IPv4", 00:11:09.497 "traddr": "10.0.0.2", 00:11:09.497 "trsvcid": "4420" 00:11:09.497 }, 00:11:09.497 "peer_address": { 00:11:09.497 "trtype": "TCP", 00:11:09.497 "adrfam": "IPv4", 00:11:09.497 "traddr": "10.0.0.1", 00:11:09.497 "trsvcid": "59742" 00:11:09.497 }, 00:11:09.497 "auth": { 00:11:09.497 
"state": "completed", 00:11:09.497 "digest": "sha256", 00:11:09.497 "dhgroup": "ffdhe2048" 00:11:09.497 } 00:11:09.497 } 00:11:09.497 ]' 00:11:09.497 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.756 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.756 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.756 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:09.756 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.756 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.756 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.756 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.014 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:11:10.581 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.581 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:10.581 08:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.581 08:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.581 08:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.581 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:10.581 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.581 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:10.581 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.149 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.407 00:11:11.407 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.407 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.407 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.672 { 00:11:11.672 "cntlid": 17, 00:11:11.672 "qid": 0, 00:11:11.672 "state": "enabled", 00:11:11.672 "listen_address": { 00:11:11.672 "trtype": "TCP", 00:11:11.672 "adrfam": "IPv4", 00:11:11.672 "traddr": "10.0.0.2", 00:11:11.672 "trsvcid": "4420" 00:11:11.672 }, 00:11:11.672 "peer_address": { 00:11:11.672 "trtype": "TCP", 00:11:11.672 "adrfam": "IPv4", 00:11:11.672 "traddr": "10.0.0.1", 00:11:11.672 "trsvcid": "59760" 00:11:11.672 }, 00:11:11.672 "auth": { 00:11:11.672 "state": "completed", 00:11:11.672 "digest": "sha256", 00:11:11.672 "dhgroup": "ffdhe3072" 00:11:11.672 } 00:11:11.672 } 00:11:11.672 ]' 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:11.672 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.948 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.948 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.948 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.948 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:11:12.884 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.884 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:12.884 08:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:12.884 08:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.884 08:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:12.884 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.884 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:12.884 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.206 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.465 00:11:13.465 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.465 08:06:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.465 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.724 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.724 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.724 08:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.724 08:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.724 08:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.724 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.724 { 00:11:13.724 "cntlid": 19, 00:11:13.724 "qid": 0, 00:11:13.724 "state": "enabled", 00:11:13.724 "listen_address": { 00:11:13.724 "trtype": "TCP", 00:11:13.724 "adrfam": "IPv4", 00:11:13.724 "traddr": "10.0.0.2", 00:11:13.724 "trsvcid": "4420" 00:11:13.724 }, 00:11:13.724 "peer_address": { 00:11:13.724 "trtype": "TCP", 00:11:13.724 "adrfam": "IPv4", 00:11:13.724 "traddr": "10.0.0.1", 00:11:13.724 "trsvcid": "59792" 00:11:13.724 }, 00:11:13.724 "auth": { 00:11:13.724 "state": "completed", 00:11:13.724 "digest": "sha256", 00:11:13.724 "dhgroup": "ffdhe3072" 00:11:13.724 } 00:11:13.724 } 00:11:13.724 ]' 00:11:13.724 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.724 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.724 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.982 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:13.982 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.982 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.982 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.982 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.241 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:11:14.809 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.810 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:14.810 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.810 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.810 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:14.810 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- 
# for keyid in "${!keys[@]}" 00:11:14.810 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:14.810 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.377 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.636 00:11:15.636 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.636 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.636 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.894 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.894 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.894 08:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.894 08:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.894 08:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:15.894 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.894 { 00:11:15.894 "cntlid": 21, 00:11:15.894 "qid": 0, 00:11:15.894 "state": "enabled", 00:11:15.894 "listen_address": { 00:11:15.894 "trtype": "TCP", 00:11:15.894 "adrfam": "IPv4", 00:11:15.894 "traddr": "10.0.0.2", 00:11:15.894 "trsvcid": "4420" 00:11:15.894 }, 00:11:15.894 "peer_address": { 00:11:15.894 "trtype": "TCP", 00:11:15.894 "adrfam": 
"IPv4", 00:11:15.894 "traddr": "10.0.0.1", 00:11:15.894 "trsvcid": "59812" 00:11:15.894 }, 00:11:15.894 "auth": { 00:11:15.894 "state": "completed", 00:11:15.894 "digest": "sha256", 00:11:15.894 "dhgroup": "ffdhe3072" 00:11:15.894 } 00:11:15.894 } 00:11:15.894 ]' 00:11:15.894 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.894 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.894 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.152 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:16.152 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.152 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.152 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.152 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.411 08:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:11:16.978 08:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.978 08:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:16.978 08:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:16.978 08:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.978 08:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.978 08:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.978 08:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:16.978 08:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.238 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.805 00:11:17.805 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.805 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.805 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:18.065 { 00:11:18.065 "cntlid": 23, 00:11:18.065 "qid": 0, 00:11:18.065 "state": "enabled", 00:11:18.065 "listen_address": { 00:11:18.065 "trtype": "TCP", 00:11:18.065 "adrfam": "IPv4", 00:11:18.065 "traddr": "10.0.0.2", 00:11:18.065 "trsvcid": "4420" 00:11:18.065 }, 00:11:18.065 "peer_address": { 00:11:18.065 "trtype": "TCP", 00:11:18.065 "adrfam": "IPv4", 00:11:18.065 "traddr": "10.0.0.1", 00:11:18.065 "trsvcid": "56048" 00:11:18.065 }, 00:11:18.065 "auth": { 00:11:18.065 "state": "completed", 00:11:18.065 "digest": "sha256", 00:11:18.065 "dhgroup": "ffdhe3072" 00:11:18.065 } 00:11:18.065 } 00:11:18.065 ]' 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.065 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.323 08:06:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:11:19.257 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.257 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:19.257 08:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:19.257 08:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.257 08:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:19.257 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:19.257 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.257 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:19.257 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.516 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.775 00:11:19.775 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:11:19.775 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.775 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.034 { 00:11:20.034 "cntlid": 25, 00:11:20.034 "qid": 0, 00:11:20.034 "state": "enabled", 00:11:20.034 "listen_address": { 00:11:20.034 "trtype": "TCP", 00:11:20.034 "adrfam": "IPv4", 00:11:20.034 "traddr": "10.0.0.2", 00:11:20.034 "trsvcid": "4420" 00:11:20.034 }, 00:11:20.034 "peer_address": { 00:11:20.034 "trtype": "TCP", 00:11:20.034 "adrfam": "IPv4", 00:11:20.034 "traddr": "10.0.0.1", 00:11:20.034 "trsvcid": "56070" 00:11:20.034 }, 00:11:20.034 "auth": { 00:11:20.034 "state": "completed", 00:11:20.034 "digest": "sha256", 00:11:20.034 "dhgroup": "ffdhe4096" 00:11:20.034 } 00:11:20.034 } 00:11:20.034 ]' 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:20.034 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.293 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.293 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.293 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.551 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:11:21.124 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.124 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:21.124 08:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:21.124 08:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.124 08:06:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:21.124 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.124 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:21.124 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.392 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.959 00:11:21.959 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.959 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.959 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.959 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.217 { 00:11:22.217 "cntlid": 27, 00:11:22.217 "qid": 0, 00:11:22.217 "state": "enabled", 00:11:22.217 "listen_address": { 00:11:22.217 "trtype": "TCP", 00:11:22.217 "adrfam": "IPv4", 00:11:22.217 "traddr": "10.0.0.2", 00:11:22.217 
"trsvcid": "4420" 00:11:22.217 }, 00:11:22.217 "peer_address": { 00:11:22.217 "trtype": "TCP", 00:11:22.217 "adrfam": "IPv4", 00:11:22.217 "traddr": "10.0.0.1", 00:11:22.217 "trsvcid": "56110" 00:11:22.217 }, 00:11:22.217 "auth": { 00:11:22.217 "state": "completed", 00:11:22.217 "digest": "sha256", 00:11:22.217 "dhgroup": "ffdhe4096" 00:11:22.217 } 00:11:22.217 } 00:11:22.217 ]' 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.217 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.476 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:11:23.412 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.412 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:23.412 08:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.412 08:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.412 08:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.412 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.412 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:23.412 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.412 08:06:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.412 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.980 00:11:23.980 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.980 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.980 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.239 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.239 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.239 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:24.239 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.239 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:24.239 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:24.239 { 00:11:24.239 "cntlid": 29, 00:11:24.239 "qid": 0, 00:11:24.239 "state": "enabled", 00:11:24.239 "listen_address": { 00:11:24.239 "trtype": "TCP", 00:11:24.239 "adrfam": "IPv4", 00:11:24.239 "traddr": "10.0.0.2", 00:11:24.239 "trsvcid": "4420" 00:11:24.239 }, 00:11:24.239 "peer_address": { 00:11:24.239 "trtype": "TCP", 00:11:24.239 "adrfam": "IPv4", 00:11:24.239 "traddr": "10.0.0.1", 00:11:24.239 "trsvcid": "56118" 00:11:24.239 }, 00:11:24.239 "auth": { 00:11:24.239 "state": "completed", 00:11:24.239 "digest": "sha256", 00:11:24.239 "dhgroup": "ffdhe4096" 00:11:24.239 } 00:11:24.239 } 00:11:24.239 ]' 00:11:24.239 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.239 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.239 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.239 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:24.240 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.240 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.240 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.240 08:06:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.498 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:11:25.433 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.433 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:25.433 08:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:25.433 08:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.433 08:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:25.433 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.433 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:25.433 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.433 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:26.000 00:11:26.000 08:06:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:26.000 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:26.000 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.000 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.000 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.000 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.000 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.000 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.000 { 00:11:26.000 "cntlid": 31, 00:11:26.000 "qid": 0, 00:11:26.000 "state": "enabled", 00:11:26.000 "listen_address": { 00:11:26.000 "trtype": "TCP", 00:11:26.000 "adrfam": "IPv4", 00:11:26.000 "traddr": "10.0.0.2", 00:11:26.000 "trsvcid": "4420" 00:11:26.000 }, 00:11:26.001 "peer_address": { 00:11:26.001 "trtype": "TCP", 00:11:26.001 "adrfam": "IPv4", 00:11:26.001 "traddr": "10.0.0.1", 00:11:26.001 "trsvcid": "56160" 00:11:26.001 }, 00:11:26.001 "auth": { 00:11:26.001 "state": "completed", 00:11:26.001 "digest": "sha256", 00:11:26.001 "dhgroup": "ffdhe4096" 00:11:26.001 } 00:11:26.001 } 00:11:26.001 ]' 00:11:26.001 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.259 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.259 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.259 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:26.259 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.259 08:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.259 08:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.259 08:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.517 08:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:11:27.084 08:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.084 08:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:27.084 08:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:27.084 08:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.084 08:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.084 08:06:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:27.084 08:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.084 08:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:27.084 08:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.343 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.909 00:11:27.909 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.909 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.909 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.167 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.167 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.167 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:28.167 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.167 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:28.167 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.167 { 00:11:28.167 "cntlid": 33, 00:11:28.167 "qid": 0, 00:11:28.167 "state": "enabled", 00:11:28.167 "listen_address": { 00:11:28.167 "trtype": "TCP", 00:11:28.167 "adrfam": "IPv4", 00:11:28.167 
"traddr": "10.0.0.2", 00:11:28.167 "trsvcid": "4420" 00:11:28.167 }, 00:11:28.167 "peer_address": { 00:11:28.167 "trtype": "TCP", 00:11:28.167 "adrfam": "IPv4", 00:11:28.167 "traddr": "10.0.0.1", 00:11:28.167 "trsvcid": "37340" 00:11:28.167 }, 00:11:28.167 "auth": { 00:11:28.167 "state": "completed", 00:11:28.167 "digest": "sha256", 00:11:28.167 "dhgroup": "ffdhe6144" 00:11:28.167 } 00:11:28.167 } 00:11:28.167 ]' 00:11:28.167 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.167 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.167 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.167 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:28.167 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.440 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.440 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.440 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.710 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:11:29.275 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.275 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:29.275 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.275 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.275 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.275 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.275 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:29.275 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:29.533 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:29.533 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.533 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:29.533 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:29.533 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:29.533 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.533 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.533 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.533 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.533 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.534 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.534 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.100 00:11:30.100 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:30.100 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.100 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.357 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.357 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.357 08:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:30.357 08:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.357 08:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:30.358 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:30.358 { 00:11:30.358 "cntlid": 35, 00:11:30.358 "qid": 0, 00:11:30.358 "state": "enabled", 00:11:30.358 "listen_address": { 00:11:30.358 "trtype": "TCP", 00:11:30.358 "adrfam": "IPv4", 00:11:30.358 "traddr": "10.0.0.2", 00:11:30.358 "trsvcid": "4420" 00:11:30.358 }, 00:11:30.358 "peer_address": { 00:11:30.358 "trtype": "TCP", 00:11:30.358 "adrfam": "IPv4", 00:11:30.358 "traddr": "10.0.0.1", 00:11:30.358 "trsvcid": "37368" 00:11:30.358 }, 00:11:30.358 "auth": { 00:11:30.358 "state": "completed", 00:11:30.358 "digest": "sha256", 00:11:30.358 "dhgroup": "ffdhe6144" 00:11:30.358 } 00:11:30.358 } 00:11:30.358 ]' 00:11:30.358 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.358 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:30.358 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.358 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:30.358 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.358 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.358 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:11:30.358 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.616 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:11:31.550 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.550 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:31.550 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.550 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.550 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.550 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.550 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.551 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.118 00:11:32.118 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.118 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.118 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.377 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.377 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.377 08:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:32.377 08:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.377 08:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:32.377 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.377 { 00:11:32.377 "cntlid": 37, 00:11:32.377 "qid": 0, 00:11:32.377 "state": "enabled", 00:11:32.377 "listen_address": { 00:11:32.377 "trtype": "TCP", 00:11:32.377 "adrfam": "IPv4", 00:11:32.377 "traddr": "10.0.0.2", 00:11:32.377 "trsvcid": "4420" 00:11:32.377 }, 00:11:32.377 "peer_address": { 00:11:32.377 "trtype": "TCP", 00:11:32.377 "adrfam": "IPv4", 00:11:32.377 "traddr": "10.0.0.1", 00:11:32.377 "trsvcid": "37400" 00:11:32.377 }, 00:11:32.377 "auth": { 00:11:32.377 "state": "completed", 00:11:32.377 "digest": "sha256", 00:11:32.377 "dhgroup": "ffdhe6144" 00:11:32.377 } 00:11:32.377 } 00:11:32.377 ]' 00:11:32.377 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.377 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.377 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.636 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:32.636 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.636 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.636 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.636 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.894 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:11:33.460 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.461 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:33.461 08:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:33.461 
08:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.461 08:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:33.461 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.461 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:33.461 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.719 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:34.285 00:11:34.285 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.285 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.285 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.543 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.543 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.543 08:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:34.543 08:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.543 08:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:34.543 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.543 { 00:11:34.543 "cntlid": 39, 00:11:34.543 "qid": 0, 00:11:34.543 "state": "enabled", 00:11:34.543 "listen_address": { 00:11:34.543 "trtype": "TCP", 00:11:34.543 "adrfam": 
"IPv4", 00:11:34.543 "traddr": "10.0.0.2", 00:11:34.543 "trsvcid": "4420" 00:11:34.543 }, 00:11:34.543 "peer_address": { 00:11:34.543 "trtype": "TCP", 00:11:34.543 "adrfam": "IPv4", 00:11:34.543 "traddr": "10.0.0.1", 00:11:34.543 "trsvcid": "37436" 00:11:34.543 }, 00:11:34.543 "auth": { 00:11:34.543 "state": "completed", 00:11:34.544 "digest": "sha256", 00:11:34.544 "dhgroup": "ffdhe6144" 00:11:34.544 } 00:11:34.544 } 00:11:34.544 ]' 00:11:34.544 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.544 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.544 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.544 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:34.544 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.802 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.802 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.802 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.060 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:11:35.626 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.626 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:35.626 08:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:35.626 08:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.626 08:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:35.626 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:35.626 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.626 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:35.626 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 
-- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.885 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.820 00:11:36.820 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.820 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.820 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.820 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.820 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.820 08:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:36.820 08:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.820 08:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:36.820 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.820 { 00:11:36.820 "cntlid": 41, 00:11:36.820 "qid": 0, 00:11:36.820 "state": "enabled", 00:11:36.820 "listen_address": { 00:11:36.820 "trtype": "TCP", 00:11:36.820 "adrfam": "IPv4", 00:11:36.820 "traddr": "10.0.0.2", 00:11:36.820 "trsvcid": "4420" 00:11:36.820 }, 00:11:36.820 "peer_address": { 00:11:36.820 "trtype": "TCP", 00:11:36.820 "adrfam": "IPv4", 00:11:36.820 "traddr": "10.0.0.1", 00:11:36.820 "trsvcid": "37468" 00:11:36.820 }, 00:11:36.820 "auth": { 00:11:36.820 "state": "completed", 00:11:36.820 "digest": "sha256", 00:11:36.820 "dhgroup": "ffdhe8192" 00:11:36.820 } 00:11:36.820 } 00:11:36.820 ]' 00:11:36.820 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.078 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.078 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.078 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:37.078 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.078 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.078 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- 
# hostrpc bdev_nvme_detach_controller nvme0 00:11:37.078 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.337 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:11:37.903 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.903 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:37.903 08:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:37.903 08:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.903 08:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:37.903 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.903 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:37.903 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.162 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.729 00:11:38.986 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.986 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.986 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.244 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.244 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.244 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:39.244 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.244 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:39.244 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.244 { 00:11:39.244 "cntlid": 43, 00:11:39.244 "qid": 0, 00:11:39.244 "state": "enabled", 00:11:39.244 "listen_address": { 00:11:39.244 "trtype": "TCP", 00:11:39.244 "adrfam": "IPv4", 00:11:39.244 "traddr": "10.0.0.2", 00:11:39.244 "trsvcid": "4420" 00:11:39.244 }, 00:11:39.244 "peer_address": { 00:11:39.244 "trtype": "TCP", 00:11:39.244 "adrfam": "IPv4", 00:11:39.244 "traddr": "10.0.0.1", 00:11:39.244 "trsvcid": "50258" 00:11:39.244 }, 00:11:39.244 "auth": { 00:11:39.244 "state": "completed", 00:11:39.244 "digest": "sha256", 00:11:39.244 "dhgroup": "ffdhe8192" 00:11:39.244 } 00:11:39.244 } 00:11:39.244 ]' 00:11:39.244 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.244 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.244 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.244 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:39.244 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.244 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.244 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.244 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.502 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.438 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.373 00:11:41.373 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.373 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.373 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.373 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.373 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.373 08:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:41.373 08:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.373 08:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:41.373 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.373 { 00:11:41.373 "cntlid": 45, 
00:11:41.373 "qid": 0, 00:11:41.373 "state": "enabled", 00:11:41.373 "listen_address": { 00:11:41.373 "trtype": "TCP", 00:11:41.373 "adrfam": "IPv4", 00:11:41.373 "traddr": "10.0.0.2", 00:11:41.373 "trsvcid": "4420" 00:11:41.373 }, 00:11:41.373 "peer_address": { 00:11:41.373 "trtype": "TCP", 00:11:41.373 "adrfam": "IPv4", 00:11:41.373 "traddr": "10.0.0.1", 00:11:41.373 "trsvcid": "50288" 00:11:41.373 }, 00:11:41.373 "auth": { 00:11:41.373 "state": "completed", 00:11:41.373 "digest": "sha256", 00:11:41.373 "dhgroup": "ffdhe8192" 00:11:41.373 } 00:11:41.373 } 00:11:41.373 ]' 00:11:41.373 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.373 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.373 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.631 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:41.631 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.631 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.631 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.631 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.889 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:11:42.454 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.454 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:42.454 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.454 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.454 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.454 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.454 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:42.454 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:42.713 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:43.649 00:11:43.649 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.649 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.649 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.649 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.649 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.649 08:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:43.649 08:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.649 08:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:43.649 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.649 { 00:11:43.649 "cntlid": 47, 00:11:43.649 "qid": 0, 00:11:43.649 "state": "enabled", 00:11:43.649 "listen_address": { 00:11:43.649 "trtype": "TCP", 00:11:43.649 "adrfam": "IPv4", 00:11:43.649 "traddr": "10.0.0.2", 00:11:43.649 "trsvcid": "4420" 00:11:43.649 }, 00:11:43.649 "peer_address": { 00:11:43.649 "trtype": "TCP", 00:11:43.649 "adrfam": "IPv4", 00:11:43.649 "traddr": "10.0.0.1", 00:11:43.649 "trsvcid": "50312" 00:11:43.649 }, 00:11:43.649 "auth": { 00:11:43.649 "state": "completed", 00:11:43.649 "digest": "sha256", 00:11:43.649 "dhgroup": "ffdhe8192" 00:11:43.649 } 00:11:43.649 } 00:11:43.649 ]' 00:11:43.649 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.908 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.908 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.908 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:43.908 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.908 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.908 08:07:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.908 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.167 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:11:44.735 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.735 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:44.735 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:44.735 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.735 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:44.735 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:44.735 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.735 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.735 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.735 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.994 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.562 00:11:45.562 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.562 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.562 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.562 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.562 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.562 08:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:45.562 08:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.562 08:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.562 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.562 { 00:11:45.562 "cntlid": 49, 00:11:45.562 "qid": 0, 00:11:45.562 "state": "enabled", 00:11:45.562 "listen_address": { 00:11:45.562 "trtype": "TCP", 00:11:45.562 "adrfam": "IPv4", 00:11:45.562 "traddr": "10.0.0.2", 00:11:45.562 "trsvcid": "4420" 00:11:45.562 }, 00:11:45.562 "peer_address": { 00:11:45.562 "trtype": "TCP", 00:11:45.562 "adrfam": "IPv4", 00:11:45.562 "traddr": "10.0.0.1", 00:11:45.562 "trsvcid": "50346" 00:11:45.562 }, 00:11:45.562 "auth": { 00:11:45.562 "state": "completed", 00:11:45.562 "digest": "sha384", 00:11:45.562 "dhgroup": "null" 00:11:45.562 } 00:11:45.562 } 00:11:45.562 ]' 00:11:45.562 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.821 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.821 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.821 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:45.821 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.821 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.821 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.821 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.080 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:11:46.648 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.648 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:46.648 08:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.648 08:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.648 08:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.648 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.648 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:46.648 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:46.907 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:46.907 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.907 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.907 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:46.907 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:46.907 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.907 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.907 08:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.908 08:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.908 08:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.908 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.908 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.167 00:11:47.425 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.425 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.425 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.684 { 00:11:47.684 "cntlid": 51, 00:11:47.684 "qid": 0, 00:11:47.684 "state": "enabled", 00:11:47.684 "listen_address": { 00:11:47.684 "trtype": "TCP", 00:11:47.684 "adrfam": "IPv4", 00:11:47.684 "traddr": "10.0.0.2", 00:11:47.684 "trsvcid": "4420" 00:11:47.684 }, 00:11:47.684 "peer_address": { 00:11:47.684 "trtype": "TCP", 00:11:47.684 "adrfam": "IPv4", 00:11:47.684 "traddr": "10.0.0.1", 00:11:47.684 "trsvcid": "50374" 00:11:47.684 }, 00:11:47.684 "auth": { 00:11:47.684 "state": "completed", 00:11:47.684 "digest": "sha384", 00:11:47.684 "dhgroup": "null" 00:11:47.684 } 00:11:47.684 } 00:11:47.684 ]' 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.684 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.943 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:11:48.511 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.511 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:48.511 08:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:48.511 08:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 08:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:48.511 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.511 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:48.511 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.771 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.339 00:11:49.339 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.339 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.339 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.339 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.597 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.597 08:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:49.597 08:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.597 08:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:49.597 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.597 { 00:11:49.597 "cntlid": 53, 00:11:49.597 "qid": 0, 00:11:49.597 "state": "enabled", 00:11:49.597 "listen_address": { 00:11:49.597 "trtype": "TCP", 00:11:49.597 "adrfam": "IPv4", 00:11:49.597 "traddr": "10.0.0.2", 00:11:49.597 "trsvcid": "4420" 00:11:49.597 }, 00:11:49.597 "peer_address": { 00:11:49.597 "trtype": "TCP", 00:11:49.597 "adrfam": "IPv4", 00:11:49.597 "traddr": "10.0.0.1", 00:11:49.597 "trsvcid": "52628" 00:11:49.597 }, 00:11:49.597 "auth": { 00:11:49.597 "state": "completed", 00:11:49.597 "digest": "sha384", 00:11:49.597 "dhgroup": "null" 00:11:49.597 } 00:11:49.597 } 00:11:49.597 ]' 00:11:49.598 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.598 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.598 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.598 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:49.598 08:07:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.598 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.598 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.598 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.856 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:50.794 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:51.051 00:11:51.051 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.051 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.051 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.310 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.310 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.310 08:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:51.310 08:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.310 08:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:51.310 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.310 { 00:11:51.310 "cntlid": 55, 00:11:51.310 "qid": 0, 00:11:51.310 "state": "enabled", 00:11:51.310 "listen_address": { 00:11:51.310 "trtype": "TCP", 00:11:51.310 "adrfam": "IPv4", 00:11:51.310 "traddr": "10.0.0.2", 00:11:51.310 "trsvcid": "4420" 00:11:51.310 }, 00:11:51.310 "peer_address": { 00:11:51.310 "trtype": "TCP", 00:11:51.310 "adrfam": "IPv4", 00:11:51.310 "traddr": "10.0.0.1", 00:11:51.310 "trsvcid": "52670" 00:11:51.310 }, 00:11:51.310 "auth": { 00:11:51.310 "state": "completed", 00:11:51.310 "digest": "sha384", 00:11:51.310 "dhgroup": "null" 00:11:51.310 } 00:11:51.310 } 00:11:51.310 ]' 00:11:51.310 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.569 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.569 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.569 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:51.569 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.569 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.569 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.569 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.828 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:11:52.395 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.395 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:52.395 08:07:14 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:11:52.395 08:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.395 08:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:52.395 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.395 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.395 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:52.396 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.654 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.913 00:11:52.913 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.913 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.913 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.172 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.172 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.172 08:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:53.172 08:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.172 08:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:53.172 08:07:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.172 { 00:11:53.172 "cntlid": 57, 00:11:53.172 "qid": 0, 00:11:53.172 "state": "enabled", 00:11:53.172 "listen_address": { 00:11:53.172 "trtype": "TCP", 00:11:53.172 "adrfam": "IPv4", 00:11:53.172 "traddr": "10.0.0.2", 00:11:53.172 "trsvcid": "4420" 00:11:53.172 }, 00:11:53.172 "peer_address": { 00:11:53.172 "trtype": "TCP", 00:11:53.172 "adrfam": "IPv4", 00:11:53.172 "traddr": "10.0.0.1", 00:11:53.172 "trsvcid": "52710" 00:11:53.172 }, 00:11:53.172 "auth": { 00:11:53.172 "state": "completed", 00:11:53.172 "digest": "sha384", 00:11:53.172 "dhgroup": "ffdhe2048" 00:11:53.172 } 00:11:53.172 } 00:11:53.172 ]' 00:11:53.172 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.431 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.431 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.431 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:53.431 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.431 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.431 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.431 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.690 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:11:54.258 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.258 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:54.258 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.258 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.516 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.516 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.516 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:54.516 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.775 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.032 00:11:55.032 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.032 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.032 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.290 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.290 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.290 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.290 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.290 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:55.290 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.290 { 00:11:55.290 "cntlid": 59, 00:11:55.290 "qid": 0, 00:11:55.290 "state": "enabled", 00:11:55.290 "listen_address": { 00:11:55.290 "trtype": "TCP", 00:11:55.290 "adrfam": "IPv4", 00:11:55.290 "traddr": "10.0.0.2", 00:11:55.290 "trsvcid": "4420" 00:11:55.290 }, 00:11:55.290 "peer_address": { 00:11:55.290 "trtype": "TCP", 00:11:55.290 "adrfam": "IPv4", 00:11:55.290 "traddr": "10.0.0.1", 00:11:55.290 "trsvcid": "52740" 00:11:55.290 }, 00:11:55.290 "auth": { 00:11:55.290 "state": "completed", 00:11:55.291 "digest": "sha384", 00:11:55.291 "dhgroup": "ffdhe2048" 00:11:55.291 } 00:11:55.291 } 00:11:55.291 ]' 00:11:55.291 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.291 08:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.291 08:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.291 08:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:55.291 08:07:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.291 08:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.291 08:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.291 08:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.857 08:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:11:56.449 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.449 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:56.449 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.449 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.449 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.449 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.449 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:56.449 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.708 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.708 08:07:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.967 00:11:56.967 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.967 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.967 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.226 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.226 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.226 08:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.226 08:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.226 08:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.226 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.226 { 00:11:57.226 "cntlid": 61, 00:11:57.226 "qid": 0, 00:11:57.226 "state": "enabled", 00:11:57.226 "listen_address": { 00:11:57.226 "trtype": "TCP", 00:11:57.226 "adrfam": "IPv4", 00:11:57.226 "traddr": "10.0.0.2", 00:11:57.226 "trsvcid": "4420" 00:11:57.226 }, 00:11:57.226 "peer_address": { 00:11:57.226 "trtype": "TCP", 00:11:57.226 "adrfam": "IPv4", 00:11:57.226 "traddr": "10.0.0.1", 00:11:57.226 "trsvcid": "52776" 00:11:57.226 }, 00:11:57.226 "auth": { 00:11:57.226 "state": "completed", 00:11:57.226 "digest": "sha384", 00:11:57.226 "dhgroup": "ffdhe2048" 00:11:57.226 } 00:11:57.226 } 00:11:57.226 ]' 00:11:57.226 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.485 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.485 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.485 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:57.485 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.485 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.485 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.485 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.744 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:11:58.311 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.311 08:07:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:11:58.311 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.311 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.311 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.311 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.311 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:58.311 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:58.570 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:58.829 00:11:58.829 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.829 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.829 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.088 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.088 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.088 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:59.088 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.088 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
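Each round in this log follows the same shape: the host-side bdev layer is pinned to a single digest/dhgroup pair, the host NQN is registered on the subsystem with the key under test, and a controller attach drives the DH-HMAC-CHAP handshake. Below is a minimal stand-alone sketch of the sha384/ffdhe2048/key3 round just logged, reusing the sockets, NQNs and key names from this run; it assumes the target answers on its default RPC socket and that the key3 keyring entry was loaded earlier by the script, and rounds that also carry a controller key add --dhchap-ctrlr-key ckeyN to both the add_host and attach_controller calls.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab

# host side: only allow sha384 + ffdhe2048 for DH-HMAC-CHAP
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# target side (assumed default RPC socket): allow this host with key3
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# host side: attach a controller; the attach only succeeds if the handshake completes
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3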
00:11:59.088 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.088 { 00:11:59.088 "cntlid": 63, 00:11:59.088 "qid": 0, 00:11:59.088 "state": "enabled", 00:11:59.088 "listen_address": { 00:11:59.088 "trtype": "TCP", 00:11:59.088 "adrfam": "IPv4", 00:11:59.088 "traddr": "10.0.0.2", 00:11:59.088 "trsvcid": "4420" 00:11:59.088 }, 00:11:59.088 "peer_address": { 00:11:59.088 "trtype": "TCP", 00:11:59.088 "adrfam": "IPv4", 00:11:59.088 "traddr": "10.0.0.1", 00:11:59.088 "trsvcid": "50744" 00:11:59.088 }, 00:11:59.088 "auth": { 00:11:59.088 "state": "completed", 00:11:59.088 "digest": "sha384", 00:11:59.088 "dhgroup": "ffdhe2048" 00:11:59.088 } 00:11:59.088 } 00:11:59.088 ]' 00:11:59.088 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.348 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.348 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.348 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:59.348 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.348 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.348 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.348 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.607 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:12:00.174 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.174 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:00.174 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:00.174 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.174 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.174 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.175 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.175 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:00.175 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.434 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.002 00:12:01.002 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.002 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.002 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.262 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.262 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.262 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.262 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.262 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.262 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.262 { 00:12:01.262 "cntlid": 65, 00:12:01.262 "qid": 0, 00:12:01.262 "state": "enabled", 00:12:01.262 "listen_address": { 00:12:01.262 "trtype": "TCP", 00:12:01.262 "adrfam": "IPv4", 00:12:01.262 "traddr": "10.0.0.2", 00:12:01.262 "trsvcid": "4420" 00:12:01.262 }, 00:12:01.262 "peer_address": { 00:12:01.262 "trtype": "TCP", 00:12:01.262 "adrfam": "IPv4", 00:12:01.262 "traddr": "10.0.0.1", 00:12:01.262 "trsvcid": "50754" 00:12:01.262 }, 00:12:01.262 "auth": { 00:12:01.262 "state": "completed", 00:12:01.262 "digest": "sha384", 00:12:01.262 "dhgroup": "ffdhe3072" 00:12:01.262 } 00:12:01.262 } 00:12:01.262 ]' 00:12:01.262 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.262 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.262 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.262 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
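After each attach, the script reads the qpair list back from the target and asserts that the negotiated authentication parameters match what was configured, which is exactly the jq sequence running here. A stand-alone version of that check for the sha384/ffdhe3072 round, assuming the target answers on its default RPC socket:

qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# the qpair at index 0 (the admin queue, qid 0) must report the negotiated
# digest and dhgroup, and the DH-HMAC-CHAP transaction must have completed
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]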
00:12:01.262 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.262 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.262 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.262 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.521 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:12:02.091 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.091 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:02.349 08:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:02.349 08:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.349 08:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:02.349 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.349 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:02.349 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:02.607 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:02.607 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.607 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:02.607 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:02.607 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:02.607 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.607 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.607 08:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:02.607 08:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.607 08:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:02.608 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.608 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.866 00:12:02.866 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.866 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.866 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.125 { 00:12:03.125 "cntlid": 67, 00:12:03.125 "qid": 0, 00:12:03.125 "state": "enabled", 00:12:03.125 "listen_address": { 00:12:03.125 "trtype": "TCP", 00:12:03.125 "adrfam": "IPv4", 00:12:03.125 "traddr": "10.0.0.2", 00:12:03.125 "trsvcid": "4420" 00:12:03.125 }, 00:12:03.125 "peer_address": { 00:12:03.125 "trtype": "TCP", 00:12:03.125 "adrfam": "IPv4", 00:12:03.125 "traddr": "10.0.0.1", 00:12:03.125 "trsvcid": "50772" 00:12:03.125 }, 00:12:03.125 "auth": { 00:12:03.125 "state": "completed", 00:12:03.125 "digest": "sha384", 00:12:03.125 "dhgroup": "ffdhe3072" 00:12:03.125 } 00:12:03.125 } 00:12:03.125 ]' 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:03.125 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.384 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.384 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.384 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.642 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:12:04.230 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:12:04.230 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:04.230 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.230 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.230 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.231 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.231 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:04.231 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.489 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.056 00:12:05.056 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.056 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.056 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.314 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.314 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.314 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:05.314 08:07:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.314 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:05.314 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.314 { 00:12:05.314 "cntlid": 69, 00:12:05.314 "qid": 0, 00:12:05.314 "state": "enabled", 00:12:05.314 "listen_address": { 00:12:05.314 "trtype": "TCP", 00:12:05.314 "adrfam": "IPv4", 00:12:05.314 "traddr": "10.0.0.2", 00:12:05.314 "trsvcid": "4420" 00:12:05.314 }, 00:12:05.314 "peer_address": { 00:12:05.314 "trtype": "TCP", 00:12:05.314 "adrfam": "IPv4", 00:12:05.314 "traddr": "10.0.0.1", 00:12:05.314 "trsvcid": "50802" 00:12:05.314 }, 00:12:05.314 "auth": { 00:12:05.314 "state": "completed", 00:12:05.314 "digest": "sha384", 00:12:05.314 "dhgroup": "ffdhe3072" 00:12:05.314 } 00:12:05.314 } 00:12:05.314 ]' 00:12:05.314 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.314 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.314 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.314 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:05.314 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.314 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.314 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.314 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.573 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local 
digest dhgroup key ckey qpairs 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.509 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.768 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.768 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:06.768 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:07.027 00:12:07.027 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.027 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.027 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.286 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.286 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.286 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.286 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.286 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.286 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.286 { 00:12:07.286 "cntlid": 71, 00:12:07.286 "qid": 0, 00:12:07.286 "state": "enabled", 00:12:07.286 "listen_address": { 00:12:07.286 "trtype": "TCP", 00:12:07.286 "adrfam": "IPv4", 00:12:07.286 "traddr": "10.0.0.2", 00:12:07.286 "trsvcid": "4420" 00:12:07.286 }, 00:12:07.286 "peer_address": { 00:12:07.286 "trtype": "TCP", 00:12:07.286 "adrfam": "IPv4", 00:12:07.286 "traddr": "10.0.0.1", 00:12:07.286 "trsvcid": "50824" 00:12:07.286 }, 00:12:07.286 "auth": { 00:12:07.286 "state": "completed", 00:12:07.286 "digest": "sha384", 00:12:07.286 "dhgroup": "ffdhe3072" 00:12:07.286 } 00:12:07.286 } 00:12:07.286 ]' 00:12:07.286 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.286 08:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.286 08:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.286 08:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == 
\f\f\d\h\e\3\0\7\2 ]] 00:12:07.286 08:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.286 08:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.286 08:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.286 08:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.545 08:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:12:08.481 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.481 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:08.481 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:08.481 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.481 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:08.481 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:08.481 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.481 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:08.481 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.739 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.998 00:12:08.998 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.999 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.999 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.257 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.257 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.257 08:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:09.257 08:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.257 08:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:09.257 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.257 { 00:12:09.257 "cntlid": 73, 00:12:09.257 "qid": 0, 00:12:09.257 "state": "enabled", 00:12:09.257 "listen_address": { 00:12:09.257 "trtype": "TCP", 00:12:09.257 "adrfam": "IPv4", 00:12:09.257 "traddr": "10.0.0.2", 00:12:09.257 "trsvcid": "4420" 00:12:09.257 }, 00:12:09.257 "peer_address": { 00:12:09.257 "trtype": "TCP", 00:12:09.257 "adrfam": "IPv4", 00:12:09.257 "traddr": "10.0.0.1", 00:12:09.257 "trsvcid": "45468" 00:12:09.257 }, 00:12:09.257 "auth": { 00:12:09.257 "state": "completed", 00:12:09.257 "digest": "sha384", 00:12:09.257 "dhgroup": "ffdhe4096" 00:12:09.257 } 00:12:09.257 } 00:12:09.257 ]' 00:12:09.257 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.257 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.257 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.516 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:09.516 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.516 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.516 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.516 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.774 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:12:10.341 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:10.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.341 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:10.341 08:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.341 08:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.341 08:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.341 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.341 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:10.341 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:10.599 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:10.599 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.599 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:10.599 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:10.599 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:10.599 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.599 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.599 08:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.599 08:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.856 08:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.856 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.856 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.114 00:12:11.114 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.114 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.114 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.371 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.371 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.371 08:07:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:12:11.371 08:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.371 08:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:11.371 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.371 { 00:12:11.371 "cntlid": 75, 00:12:11.371 "qid": 0, 00:12:11.371 "state": "enabled", 00:12:11.371 "listen_address": { 00:12:11.371 "trtype": "TCP", 00:12:11.371 "adrfam": "IPv4", 00:12:11.371 "traddr": "10.0.0.2", 00:12:11.371 "trsvcid": "4420" 00:12:11.371 }, 00:12:11.371 "peer_address": { 00:12:11.371 "trtype": "TCP", 00:12:11.371 "adrfam": "IPv4", 00:12:11.371 "traddr": "10.0.0.1", 00:12:11.371 "trsvcid": "45490" 00:12:11.371 }, 00:12:11.371 "auth": { 00:12:11.371 "state": "completed", 00:12:11.371 "digest": "sha384", 00:12:11.371 "dhgroup": "ffdhe4096" 00:12:11.371 } 00:12:11.371 } 00:12:11.371 ]' 00:12:11.371 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.628 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.628 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.628 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:11.628 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.628 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.628 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.628 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.885 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:12:12.452 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.452 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:12.452 08:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:12.452 08:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.452 08:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:12.452 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.452 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:12.452 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe4096 2 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.710 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.276 00:12:13.276 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.276 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.276 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.535 { 00:12:13.535 "cntlid": 77, 00:12:13.535 "qid": 0, 00:12:13.535 "state": "enabled", 00:12:13.535 "listen_address": { 00:12:13.535 "trtype": "TCP", 00:12:13.535 "adrfam": "IPv4", 00:12:13.535 "traddr": "10.0.0.2", 00:12:13.535 "trsvcid": "4420" 00:12:13.535 }, 00:12:13.535 "peer_address": { 00:12:13.535 "trtype": "TCP", 00:12:13.535 "adrfam": "IPv4", 00:12:13.535 "traddr": "10.0.0.1", 00:12:13.535 "trsvcid": "45530" 00:12:13.535 }, 00:12:13.535 "auth": { 00:12:13.535 "state": "completed", 00:12:13.535 "digest": "sha384", 00:12:13.535 "dhgroup": "ffdhe4096" 00:12:13.535 } 00:12:13.535 } 00:12:13.535 ]' 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
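Each key is also exercised from the kernel host with nvme-cli, passing the same DHHC-1 secrets on the command line, and the host registration is torn down again before the next round begins. A sketch of that leg of the test, with $key and $ckey standing in for the DHHC-1:xx:... strings printed in the log:

# kernel initiator: connect with in-band DH-HMAC-CHAP, then clean up
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab \
        --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# drop the host from the subsystem so the next key/dhgroup combination starts clean
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab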
00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.535 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.794 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:12:14.362 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.362 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:14.362 08:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:14.362 08:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.362 08:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:14.362 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.362 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:14.362 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.929 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:15.188 00:12:15.188 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.188 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.188 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.446 { 00:12:15.446 "cntlid": 79, 00:12:15.446 "qid": 0, 00:12:15.446 "state": "enabled", 00:12:15.446 "listen_address": { 00:12:15.446 "trtype": "TCP", 00:12:15.446 "adrfam": "IPv4", 00:12:15.446 "traddr": "10.0.0.2", 00:12:15.446 "trsvcid": "4420" 00:12:15.446 }, 00:12:15.446 "peer_address": { 00:12:15.446 "trtype": "TCP", 00:12:15.446 "adrfam": "IPv4", 00:12:15.446 "traddr": "10.0.0.1", 00:12:15.446 "trsvcid": "45568" 00:12:15.446 }, 00:12:15.446 "auth": { 00:12:15.446 "state": "completed", 00:12:15.446 "digest": "sha384", 00:12:15.446 "dhgroup": "ffdhe4096" 00:12:15.446 } 00:12:15.446 } 00:12:15.446 ]' 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:15.446 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.704 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.704 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.705 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.705 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.648 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.648 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.227 00:12:17.227 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.227 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.227 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.486 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.486 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.486 08:07:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:17.486 08:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.486 08:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:17.486 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.486 { 00:12:17.486 "cntlid": 81, 00:12:17.486 "qid": 0, 00:12:17.486 "state": "enabled", 00:12:17.486 "listen_address": { 00:12:17.486 "trtype": "TCP", 00:12:17.486 "adrfam": "IPv4", 00:12:17.486 "traddr": "10.0.0.2", 00:12:17.486 "trsvcid": "4420" 00:12:17.486 }, 00:12:17.486 "peer_address": { 00:12:17.486 "trtype": "TCP", 00:12:17.486 "adrfam": "IPv4", 00:12:17.486 "traddr": "10.0.0.1", 00:12:17.486 "trsvcid": "45592" 00:12:17.486 }, 00:12:17.486 "auth": { 00:12:17.486 "state": "completed", 00:12:17.486 "digest": "sha384", 00:12:17.486 "dhgroup": "ffdhe6144" 00:12:17.486 } 00:12:17.486 } 00:12:17.486 ]' 00:12:17.486 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.486 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.486 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.486 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:17.486 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.744 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.744 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.744 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.003 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:12:18.570 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.570 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:18.570 08:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.570 08:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.570 08:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.570 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.570 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:18.570 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:18.831 
08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.831 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.397 00:12:19.397 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.397 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.397 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.655 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.655 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.655 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.655 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.655 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.656 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.656 { 00:12:19.656 "cntlid": 83, 00:12:19.656 "qid": 0, 00:12:19.656 "state": "enabled", 00:12:19.656 "listen_address": { 00:12:19.656 "trtype": "TCP", 00:12:19.656 "adrfam": "IPv4", 00:12:19.656 "traddr": "10.0.0.2", 00:12:19.656 "trsvcid": "4420" 00:12:19.656 }, 00:12:19.656 "peer_address": { 00:12:19.656 "trtype": "TCP", 00:12:19.656 "adrfam": "IPv4", 00:12:19.656 "traddr": "10.0.0.1", 00:12:19.656 "trsvcid": "55862" 00:12:19.656 }, 00:12:19.656 "auth": { 00:12:19.656 "state": "completed", 00:12:19.656 "digest": "sha384", 00:12:19.656 "dhgroup": "ffdhe6144" 00:12:19.656 } 00:12:19.656 } 00:12:19.656 ]' 00:12:19.656 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.656 08:07:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.656 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.656 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:19.656 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.656 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.656 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.656 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.914 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.851 08:07:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.851 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.418 00:12:21.418 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.418 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.418 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.677 { 00:12:21.677 "cntlid": 85, 00:12:21.677 "qid": 0, 00:12:21.677 "state": "enabled", 00:12:21.677 "listen_address": { 00:12:21.677 "trtype": "TCP", 00:12:21.677 "adrfam": "IPv4", 00:12:21.677 "traddr": "10.0.0.2", 00:12:21.677 "trsvcid": "4420" 00:12:21.677 }, 00:12:21.677 "peer_address": { 00:12:21.677 "trtype": "TCP", 00:12:21.677 "adrfam": "IPv4", 00:12:21.677 "traddr": "10.0.0.1", 00:12:21.677 "trsvcid": "55878" 00:12:21.677 }, 00:12:21.677 "auth": { 00:12:21.677 "state": "completed", 00:12:21.677 "digest": "sha384", 00:12:21.677 "dhgroup": "ffdhe6144" 00:12:21.677 } 00:12:21.677 } 00:12:21.677 ]' 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:21.677 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.936 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.936 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.936 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.195 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret 
DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:12:22.763 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.763 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:22.763 08:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:22.763 08:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.763 08:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:22.763 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.763 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.763 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.021 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.587 00:12:23.587 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.587 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.587 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.846 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.846 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:23.846 08:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:23.846 08:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.846 08:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:23.846 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.846 { 00:12:23.846 "cntlid": 87, 00:12:23.846 "qid": 0, 00:12:23.846 "state": "enabled", 00:12:23.846 "listen_address": { 00:12:23.846 "trtype": "TCP", 00:12:23.846 "adrfam": "IPv4", 00:12:23.846 "traddr": "10.0.0.2", 00:12:23.846 "trsvcid": "4420" 00:12:23.846 }, 00:12:23.846 "peer_address": { 00:12:23.846 "trtype": "TCP", 00:12:23.846 "adrfam": "IPv4", 00:12:23.846 "traddr": "10.0.0.1", 00:12:23.846 "trsvcid": "55912" 00:12:23.846 }, 00:12:23.846 "auth": { 00:12:23.846 "state": "completed", 00:12:23.846 "digest": "sha384", 00:12:23.846 "dhgroup": "ffdhe6144" 00:12:23.846 } 00:12:23.846 } 00:12:23.846 ]' 00:12:23.846 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.846 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.846 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.105 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:24.105 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.105 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.105 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.105 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.364 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:12:24.931 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.931 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:24.931 08:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:24.931 08:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.931 08:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:24.931 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.931 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.931 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:24.931 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.190 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.758 00:12:25.758 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.758 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.758 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.017 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.017 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.017 08:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.017 08:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.276 08:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.276 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.276 { 00:12:26.276 "cntlid": 89, 00:12:26.276 "qid": 0, 00:12:26.276 "state": "enabled", 00:12:26.276 "listen_address": { 00:12:26.276 "trtype": "TCP", 00:12:26.276 "adrfam": "IPv4", 00:12:26.276 "traddr": "10.0.0.2", 00:12:26.276 "trsvcid": "4420" 00:12:26.276 }, 00:12:26.276 "peer_address": { 00:12:26.276 "trtype": "TCP", 00:12:26.276 "adrfam": "IPv4", 00:12:26.276 "traddr": "10.0.0.1", 00:12:26.276 "trsvcid": "55938" 00:12:26.276 }, 00:12:26.276 "auth": { 00:12:26.276 "state": "completed", 00:12:26.276 "digest": "sha384", 00:12:26.276 "dhgroup": "ffdhe8192" 00:12:26.276 } 00:12:26.276 } 00:12:26.276 ]' 00:12:26.276 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
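The trace repeats one fixed cycle for every digest, DH group, and key index: restrict the host-side options, register the host NQN with its DH-HMAC-CHAP keys on the target, attach a controller from the SPDK host app, read back the negotiated auth parameters from the new qpair, then detach and re-run the handshake with the kernel initiator before removing the host again. Condensed into plain commands, one pass of that cycle looks roughly like the sketch below. This is a hand-written summary, not the literal target/auth.sh code: the rpc.py and nvme-cli invocations are copied from the trace, but HOSTNQN, RPC, DHCHAP_SECRET and DHCHAP_CTRL_SECRET are illustrative shell variables standing in for the long uuid-based host NQN, the rpc.py path, and the DHHC-1 secrets shown in the log, and the target-side calls are assumed to go to the application's default RPC socket (the script routes them through its own rpc_cmd helper).

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Limit the host-side initiator to a single digest/DH-group combination.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Register the host on the target subsystem with its DH-HMAC-CHAP keys
# (target-side call; default RPC socket assumed here).
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the SPDK host app; this is where the
# bidirectional authentication actually runs.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Read back what was negotiated: the qpair's auth block should report the
# requested digest and dhgroup with state "completed".
$RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'

# Detach, then repeat the handshake with the kernel initiator using the
# in-band DHHC-1 secrets (placeholders here for the strings in the trace),
# and finally drop the host entry before the next iteration.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" \
    --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
$RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"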
00:12:26.276 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.276 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.276 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.276 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.276 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.276 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.277 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.535 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:12:27.472 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.472 08:07:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.472 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.038 00:12:28.038 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.038 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.038 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.297 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.556 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.556 08:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:28.556 08:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.556 08:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:28.556 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.556 { 00:12:28.556 "cntlid": 91, 00:12:28.556 "qid": 0, 00:12:28.556 "state": "enabled", 00:12:28.556 "listen_address": { 00:12:28.556 "trtype": "TCP", 00:12:28.556 "adrfam": "IPv4", 00:12:28.557 "traddr": "10.0.0.2", 00:12:28.557 "trsvcid": "4420" 00:12:28.557 }, 00:12:28.557 "peer_address": { 00:12:28.557 "trtype": "TCP", 00:12:28.557 "adrfam": "IPv4", 00:12:28.557 "traddr": "10.0.0.1", 00:12:28.557 "trsvcid": "58570" 00:12:28.557 }, 00:12:28.557 "auth": { 00:12:28.557 "state": "completed", 00:12:28.557 "digest": "sha384", 00:12:28.557 "dhgroup": "ffdhe8192" 00:12:28.557 } 00:12:28.557 } 00:12:28.557 ]' 00:12:28.557 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.557 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.557 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.557 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.557 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.557 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.557 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.557 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.815 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret 
DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.752 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.689 00:12:30.689 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.689 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.689 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.689 08:07:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.689 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.689 08:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.689 08:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.689 08:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.689 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.689 { 00:12:30.689 "cntlid": 93, 00:12:30.689 "qid": 0, 00:12:30.689 "state": "enabled", 00:12:30.689 "listen_address": { 00:12:30.689 "trtype": "TCP", 00:12:30.689 "adrfam": "IPv4", 00:12:30.689 "traddr": "10.0.0.2", 00:12:30.689 "trsvcid": "4420" 00:12:30.689 }, 00:12:30.689 "peer_address": { 00:12:30.689 "trtype": "TCP", 00:12:30.689 "adrfam": "IPv4", 00:12:30.689 "traddr": "10.0.0.1", 00:12:30.689 "trsvcid": "58588" 00:12:30.689 }, 00:12:30.689 "auth": { 00:12:30.689 "state": "completed", 00:12:30.689 "digest": "sha384", 00:12:30.689 "dhgroup": "ffdhe8192" 00:12:30.689 } 00:12:30.689 } 00:12:30.689 ]' 00:12:30.689 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.949 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.949 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.949 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.949 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.949 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.949 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.949 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.209 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:12:31.775 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.775 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:31.776 08:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.776 08:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.776 08:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.776 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.776 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:31.776 08:07:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.035 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.603 00:12:32.603 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.603 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.603 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.862 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.862 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.862 08:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.862 08:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.862 08:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.862 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.862 { 00:12:32.862 "cntlid": 95, 00:12:32.862 "qid": 0, 00:12:32.862 "state": "enabled", 00:12:32.862 "listen_address": { 00:12:32.862 "trtype": "TCP", 00:12:32.862 "adrfam": "IPv4", 00:12:32.862 "traddr": "10.0.0.2", 00:12:32.862 "trsvcid": "4420" 00:12:32.862 }, 00:12:32.862 "peer_address": { 00:12:32.862 "trtype": "TCP", 00:12:32.862 "adrfam": "IPv4", 00:12:32.862 "traddr": "10.0.0.1", 00:12:32.862 "trsvcid": "58630" 00:12:32.862 }, 00:12:32.862 "auth": { 00:12:32.862 "state": "completed", 00:12:32.862 "digest": "sha384", 00:12:32.862 "dhgroup": "ffdhe8192" 00:12:32.862 } 00:12:32.862 } 00:12:32.862 ]' 00:12:32.862 08:07:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.862 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.862 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.121 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.121 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.121 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.121 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.121 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.381 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:12:33.948 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.948 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:33.948 08:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.948 08:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.948 08:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.948 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:33.948 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:33.948 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.949 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:33.949 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.208 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.466 00:12:34.466 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.466 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.466 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.724 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.724 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.724 08:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.724 08:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.724 08:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.724 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.724 { 00:12:34.724 "cntlid": 97, 00:12:34.724 "qid": 0, 00:12:34.724 "state": "enabled", 00:12:34.724 "listen_address": { 00:12:34.724 "trtype": "TCP", 00:12:34.724 "adrfam": "IPv4", 00:12:34.724 "traddr": "10.0.0.2", 00:12:34.724 "trsvcid": "4420" 00:12:34.724 }, 00:12:34.724 "peer_address": { 00:12:34.724 "trtype": "TCP", 00:12:34.724 "adrfam": "IPv4", 00:12:34.724 "traddr": "10.0.0.1", 00:12:34.724 "trsvcid": "58650" 00:12:34.724 }, 00:12:34.724 "auth": { 00:12:34.724 "state": "completed", 00:12:34.724 "digest": "sha512", 00:12:34.725 "dhgroup": "null" 00:12:34.725 } 00:12:34.725 } 00:12:34.725 ]' 00:12:34.725 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.984 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.984 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.984 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:34.984 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.984 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.984 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.984 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.242 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:12:36.177 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.177 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:36.177 08:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:36.177 08:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.177 08:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:36.177 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.177 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:36.177 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.436 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.695 00:12:36.695 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.695 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:36.695 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.954 { 00:12:36.954 "cntlid": 99, 00:12:36.954 "qid": 0, 00:12:36.954 "state": "enabled", 00:12:36.954 "listen_address": { 00:12:36.954 "trtype": "TCP", 00:12:36.954 "adrfam": "IPv4", 00:12:36.954 "traddr": "10.0.0.2", 00:12:36.954 "trsvcid": "4420" 00:12:36.954 }, 00:12:36.954 "peer_address": { 00:12:36.954 "trtype": "TCP", 00:12:36.954 "adrfam": "IPv4", 00:12:36.954 "traddr": "10.0.0.1", 00:12:36.954 "trsvcid": "58678" 00:12:36.954 }, 00:12:36.954 "auth": { 00:12:36.954 "state": "completed", 00:12:36.954 "digest": "sha512", 00:12:36.954 "dhgroup": "null" 00:12:36.954 } 00:12:36.954 } 00:12:36.954 ]' 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.954 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.214 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:12:38.150 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.150 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:38.150 08:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:38.150 08:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.150 08:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.150 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.150 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:38.150 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.420 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.684 00:12:38.684 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.684 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.684 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.942 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.942 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.942 08:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:38.942 08:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.942 08:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.942 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.942 { 00:12:38.942 "cntlid": 101, 00:12:38.942 "qid": 0, 00:12:38.942 "state": "enabled", 00:12:38.942 "listen_address": { 00:12:38.942 "trtype": "TCP", 00:12:38.942 "adrfam": "IPv4", 00:12:38.942 "traddr": "10.0.0.2", 00:12:38.942 "trsvcid": "4420" 00:12:38.942 }, 00:12:38.942 "peer_address": { 00:12:38.942 "trtype": "TCP", 00:12:38.942 "adrfam": "IPv4", 00:12:38.942 "traddr": "10.0.0.1", 00:12:38.942 "trsvcid": "53368" 00:12:38.942 }, 00:12:38.942 "auth": { 00:12:38.942 
"state": "completed", 00:12:38.942 "digest": "sha512", 00:12:38.942 "dhgroup": "null" 00:12:38.942 } 00:12:38.942 } 00:12:38.942 ]' 00:12:38.942 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.942 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.942 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.201 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:39.201 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.201 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.201 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.201 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.460 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:12:40.027 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.027 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:40.027 08:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.027 08:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.027 08:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.027 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:40.027 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:40.027 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:40.285 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:40.285 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.285 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:40.285 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:40.285 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:40.285 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.285 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:12:40.285 08:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:12:40.285 08:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.286 08:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.286 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:40.286 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:40.543 00:12:40.543 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.543 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.543 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.802 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.802 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.802 08:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.802 08:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.802 08:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.802 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.802 { 00:12:40.802 "cntlid": 103, 00:12:40.802 "qid": 0, 00:12:40.802 "state": "enabled", 00:12:40.802 "listen_address": { 00:12:40.802 "trtype": "TCP", 00:12:40.802 "adrfam": "IPv4", 00:12:40.802 "traddr": "10.0.0.2", 00:12:40.802 "trsvcid": "4420" 00:12:40.802 }, 00:12:40.802 "peer_address": { 00:12:40.802 "trtype": "TCP", 00:12:40.802 "adrfam": "IPv4", 00:12:40.802 "traddr": "10.0.0.1", 00:12:40.802 "trsvcid": "53382" 00:12:40.802 }, 00:12:40.802 "auth": { 00:12:40.802 "state": "completed", 00:12:40.802 "digest": "sha512", 00:12:40.802 "dhgroup": "null" 00:12:40.802 } 00:12:40.802 } 00:12:40.802 ]' 00:12:40.802 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.061 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.061 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.061 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:41.061 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.061 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.061 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.061 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.318 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 
0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:12:41.886 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.886 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:41.886 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.886 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.886 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.886 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.886 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.886 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:41.886 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:42.144 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:42.144 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.144 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:42.145 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:42.145 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:42.145 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.145 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.145 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.145 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.145 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.145 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.145 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.711 00:12:42.711 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.711 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.711 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.021 { 00:12:43.021 "cntlid": 105, 00:12:43.021 "qid": 0, 00:12:43.021 "state": "enabled", 00:12:43.021 "listen_address": { 00:12:43.021 "trtype": "TCP", 00:12:43.021 "adrfam": "IPv4", 00:12:43.021 "traddr": "10.0.0.2", 00:12:43.021 "trsvcid": "4420" 00:12:43.021 }, 00:12:43.021 "peer_address": { 00:12:43.021 "trtype": "TCP", 00:12:43.021 "adrfam": "IPv4", 00:12:43.021 "traddr": "10.0.0.1", 00:12:43.021 "trsvcid": "53408" 00:12:43.021 }, 00:12:43.021 "auth": { 00:12:43.021 "state": "completed", 00:12:43.021 "digest": "sha512", 00:12:43.021 "dhgroup": "ffdhe2048" 00:12:43.021 } 00:12:43.021 } 00:12:43.021 ]' 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.021 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.287 08:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:12:44.223 08:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.223 08:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:44.223 08:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.223 08:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.223 08:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.223 08:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.223 08:08:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:44.223 08:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.223 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.790 00:12:44.790 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.790 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.790 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.049 { 00:12:45.049 "cntlid": 107, 00:12:45.049 "qid": 0, 00:12:45.049 "state": "enabled", 00:12:45.049 "listen_address": { 00:12:45.049 "trtype": "TCP", 00:12:45.049 "adrfam": "IPv4", 00:12:45.049 "traddr": "10.0.0.2", 00:12:45.049 "trsvcid": "4420" 00:12:45.049 }, 00:12:45.049 "peer_address": { 00:12:45.049 "trtype": "TCP", 00:12:45.049 "adrfam": "IPv4", 00:12:45.049 "traddr": "10.0.0.1", 
00:12:45.049 "trsvcid": "53448" 00:12:45.049 }, 00:12:45.049 "auth": { 00:12:45.049 "state": "completed", 00:12:45.049 "digest": "sha512", 00:12:45.049 "dhgroup": "ffdhe2048" 00:12:45.049 } 00:12:45.049 } 00:12:45.049 ]' 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.049 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.307 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:12:45.873 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.874 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:45.874 08:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.874 08:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.132 08:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.392 08:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.392 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.392 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.650 00:12:46.650 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.650 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.650 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.909 { 00:12:46.909 "cntlid": 109, 00:12:46.909 "qid": 0, 00:12:46.909 "state": "enabled", 00:12:46.909 "listen_address": { 00:12:46.909 "trtype": "TCP", 00:12:46.909 "adrfam": "IPv4", 00:12:46.909 "traddr": "10.0.0.2", 00:12:46.909 "trsvcid": "4420" 00:12:46.909 }, 00:12:46.909 "peer_address": { 00:12:46.909 "trtype": "TCP", 00:12:46.909 "adrfam": "IPv4", 00:12:46.909 "traddr": "10.0.0.1", 00:12:46.909 "trsvcid": "53466" 00:12:46.909 }, 00:12:46.909 "auth": { 00:12:46.909 "state": "completed", 00:12:46.909 "digest": "sha512", 00:12:46.909 "dhgroup": "ffdhe2048" 00:12:46.909 } 00:12:46.909 } 00:12:46.909 ]' 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.909 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.168 08:08:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:48.104 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:48.364 00:12:48.364 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.364 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.364 08:08:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.622 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.622 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.622 08:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:48.623 08:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.623 08:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:48.623 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.623 { 00:12:48.623 "cntlid": 111, 00:12:48.623 "qid": 0, 00:12:48.623 "state": "enabled", 00:12:48.623 "listen_address": { 00:12:48.623 "trtype": "TCP", 00:12:48.623 "adrfam": "IPv4", 00:12:48.623 "traddr": "10.0.0.2", 00:12:48.623 "trsvcid": "4420" 00:12:48.623 }, 00:12:48.623 "peer_address": { 00:12:48.623 "trtype": "TCP", 00:12:48.623 "adrfam": "IPv4", 00:12:48.623 "traddr": "10.0.0.1", 00:12:48.623 "trsvcid": "51316" 00:12:48.623 }, 00:12:48.623 "auth": { 00:12:48.623 "state": "completed", 00:12:48.623 "digest": "sha512", 00:12:48.623 "dhgroup": "ffdhe2048" 00:12:48.623 } 00:12:48.623 } 00:12:48.623 ]' 00:12:48.623 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.623 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.623 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.623 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:48.623 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.881 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.881 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.881 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.139 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:12:49.708 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.708 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:49.708 08:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:49.708 08:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.708 08:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:49.708 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:49.708 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:49.708 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:49.708 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.967 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.226 00:12:50.226 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.226 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.226 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.484 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.484 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.484 08:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.484 08:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.484 08:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.484 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.484 { 00:12:50.484 "cntlid": 113, 00:12:50.484 "qid": 0, 00:12:50.484 "state": "enabled", 00:12:50.484 "listen_address": { 00:12:50.484 "trtype": "TCP", 00:12:50.484 "adrfam": "IPv4", 00:12:50.484 "traddr": "10.0.0.2", 00:12:50.484 "trsvcid": "4420" 00:12:50.484 }, 00:12:50.484 "peer_address": { 00:12:50.484 "trtype": "TCP", 00:12:50.484 "adrfam": "IPv4", 
00:12:50.484 "traddr": "10.0.0.1", 00:12:50.484 "trsvcid": "51340" 00:12:50.484 }, 00:12:50.484 "auth": { 00:12:50.484 "state": "completed", 00:12:50.484 "digest": "sha512", 00:12:50.484 "dhgroup": "ffdhe3072" 00:12:50.484 } 00:12:50.484 } 00:12:50.484 ]' 00:12:50.484 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.743 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.743 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.743 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:50.743 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.743 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.743 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.743 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.001 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.937 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.506 00:12:52.506 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.506 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.506 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.506 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.506 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.506 08:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.506 08:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.506 08:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.506 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.506 { 00:12:52.506 "cntlid": 115, 00:12:52.506 "qid": 0, 00:12:52.506 "state": "enabled", 00:12:52.506 "listen_address": { 00:12:52.506 "trtype": "TCP", 00:12:52.506 "adrfam": "IPv4", 00:12:52.506 "traddr": "10.0.0.2", 00:12:52.506 "trsvcid": "4420" 00:12:52.506 }, 00:12:52.506 "peer_address": { 00:12:52.506 "trtype": "TCP", 00:12:52.506 "adrfam": "IPv4", 00:12:52.506 "traddr": "10.0.0.1", 00:12:52.506 "trsvcid": "51384" 00:12:52.506 }, 00:12:52.506 "auth": { 00:12:52.506 "state": "completed", 00:12:52.506 "digest": "sha512", 00:12:52.506 "dhgroup": "ffdhe3072" 00:12:52.506 } 00:12:52.506 } 00:12:52.506 ]' 00:12:52.506 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.764 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.764 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.764 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:52.764 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.764 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.764 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.764 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.023 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:12:53.589 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.589 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:53.589 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.589 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.589 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.589 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.589 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:53.589 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.848 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.415 00:12:54.415 08:08:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.415 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:54.415 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.415 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.415 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.415 08:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:54.415 08:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.415 08:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:54.415 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.415 { 00:12:54.415 "cntlid": 117, 00:12:54.415 "qid": 0, 00:12:54.415 "state": "enabled", 00:12:54.415 "listen_address": { 00:12:54.415 "trtype": "TCP", 00:12:54.415 "adrfam": "IPv4", 00:12:54.415 "traddr": "10.0.0.2", 00:12:54.415 "trsvcid": "4420" 00:12:54.415 }, 00:12:54.415 "peer_address": { 00:12:54.415 "trtype": "TCP", 00:12:54.415 "adrfam": "IPv4", 00:12:54.415 "traddr": "10.0.0.1", 00:12:54.415 "trsvcid": "51400" 00:12:54.415 }, 00:12:54.415 "auth": { 00:12:54.415 "state": "completed", 00:12:54.415 "digest": "sha512", 00:12:54.415 "dhgroup": "ffdhe3072" 00:12:54.415 } 00:12:54.415 } 00:12:54.415 ]' 00:12:54.415 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.674 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.674 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.674 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:54.674 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:54.674 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.674 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.674 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.933 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:12:55.499 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.499 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:55.499 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.499 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.499 08:08:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.499 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.499 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:55.499 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:55.757 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.324 00:12:56.324 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.324 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.324 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.583 { 00:12:56.583 "cntlid": 119, 00:12:56.583 "qid": 0, 00:12:56.583 "state": "enabled", 00:12:56.583 "listen_address": { 00:12:56.583 "trtype": "TCP", 00:12:56.583 "adrfam": "IPv4", 00:12:56.583 "traddr": "10.0.0.2", 00:12:56.583 "trsvcid": "4420" 00:12:56.583 }, 00:12:56.583 
"peer_address": { 00:12:56.583 "trtype": "TCP", 00:12:56.583 "adrfam": "IPv4", 00:12:56.583 "traddr": "10.0.0.1", 00:12:56.583 "trsvcid": "51420" 00:12:56.583 }, 00:12:56.583 "auth": { 00:12:56.583 "state": "completed", 00:12:56.583 "digest": "sha512", 00:12:56.583 "dhgroup": "ffdhe3072" 00:12:56.583 } 00:12:56.583 } 00:12:56.583 ]' 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.583 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.841 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.777 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.345 00:12:58.345 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.345 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.345 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.604 { 00:12:58.604 "cntlid": 121, 00:12:58.604 "qid": 0, 00:12:58.604 "state": "enabled", 00:12:58.604 "listen_address": { 00:12:58.604 "trtype": "TCP", 00:12:58.604 "adrfam": "IPv4", 00:12:58.604 "traddr": "10.0.0.2", 00:12:58.604 "trsvcid": "4420" 00:12:58.604 }, 00:12:58.604 "peer_address": { 00:12:58.604 "trtype": "TCP", 00:12:58.604 "adrfam": "IPv4", 00:12:58.604 "traddr": "10.0.0.1", 00:12:58.604 "trsvcid": "37562" 00:12:58.604 }, 00:12:58.604 "auth": { 00:12:58.604 "state": "completed", 00:12:58.604 "digest": "sha512", 00:12:58.604 "dhgroup": "ffdhe4096" 00:12:58.604 } 00:12:58.604 } 00:12:58.604 ]' 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.604 08:08:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.171 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:12:59.738 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.738 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:12:59.738 08:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:59.738 08:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.738 08:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:59.738 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.738 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:59.738 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.996 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:13:00.254 00:13:00.254 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.254 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.254 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.512 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.512 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.512 08:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:00.512 08:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.512 08:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:00.512 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.512 { 00:13:00.512 "cntlid": 123, 00:13:00.512 "qid": 0, 00:13:00.512 "state": "enabled", 00:13:00.512 "listen_address": { 00:13:00.512 "trtype": "TCP", 00:13:00.512 "adrfam": "IPv4", 00:13:00.512 "traddr": "10.0.0.2", 00:13:00.512 "trsvcid": "4420" 00:13:00.512 }, 00:13:00.512 "peer_address": { 00:13:00.512 "trtype": "TCP", 00:13:00.512 "adrfam": "IPv4", 00:13:00.512 "traddr": "10.0.0.1", 00:13:00.512 "trsvcid": "37594" 00:13:00.512 }, 00:13:00.512 "auth": { 00:13:00.512 "state": "completed", 00:13:00.512 "digest": "sha512", 00:13:00.512 "dhgroup": "ffdhe4096" 00:13:00.512 } 00:13:00.512 } 00:13:00.512 ]' 00:13:00.512 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.769 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.769 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.769 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:00.769 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.769 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.769 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.769 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.027 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:01.962 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.963 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.963 08:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:01.963 08:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.963 08:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:01.963 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.963 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.529 00:13:02.529 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.529 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.529 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.787 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.787 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.787 08:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:02.787 08:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.787 08:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:02.787 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.787 { 00:13:02.787 "cntlid": 125, 00:13:02.788 "qid": 0, 00:13:02.788 "state": "enabled", 00:13:02.788 "listen_address": { 00:13:02.788 
"trtype": "TCP", 00:13:02.788 "adrfam": "IPv4", 00:13:02.788 "traddr": "10.0.0.2", 00:13:02.788 "trsvcid": "4420" 00:13:02.788 }, 00:13:02.788 "peer_address": { 00:13:02.788 "trtype": "TCP", 00:13:02.788 "adrfam": "IPv4", 00:13:02.788 "traddr": "10.0.0.1", 00:13:02.788 "trsvcid": "37620" 00:13:02.788 }, 00:13:02.788 "auth": { 00:13:02.788 "state": "completed", 00:13:02.788 "digest": "sha512", 00:13:02.788 "dhgroup": "ffdhe4096" 00:13:02.788 } 00:13:02.788 } 00:13:02.788 ]' 00:13:02.788 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:02.788 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.788 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:02.788 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:02.788 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:02.788 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.788 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.788 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.046 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:13:03.614 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.614 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:03.614 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.614 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.614 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.614 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.614 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:03.614 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:03.872 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:03.872 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.872 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:03.872 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:03.872 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:03.872 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.872 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:13:03.872 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.872 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.130 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.130 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:04.130 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:04.389 00:13:04.389 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:04.389 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.389 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:04.648 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.648 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.648 08:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.648 08:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.648 08:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.648 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:04.648 { 00:13:04.648 "cntlid": 127, 00:13:04.648 "qid": 0, 00:13:04.648 "state": "enabled", 00:13:04.648 "listen_address": { 00:13:04.648 "trtype": "TCP", 00:13:04.648 "adrfam": "IPv4", 00:13:04.648 "traddr": "10.0.0.2", 00:13:04.648 "trsvcid": "4420" 00:13:04.648 }, 00:13:04.648 "peer_address": { 00:13:04.648 "trtype": "TCP", 00:13:04.648 "adrfam": "IPv4", 00:13:04.648 "traddr": "10.0.0.1", 00:13:04.648 "trsvcid": "37662" 00:13:04.648 }, 00:13:04.648 "auth": { 00:13:04.648 "state": "completed", 00:13:04.648 "digest": "sha512", 00:13:04.648 "dhgroup": "ffdhe4096" 00:13:04.648 } 00:13:04.648 } 00:13:04.648 ]' 00:13:04.648 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:04.648 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.648 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:04.906 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:04.906 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:04.906 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.906 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.906 08:08:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.165 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:13:05.734 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.734 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:05.734 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:05.734 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.734 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:05.734 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:05.734 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:05.734 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.734 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.993 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.560 00:13:06.560 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.560 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.560 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:06.819 { 00:13:06.819 "cntlid": 129, 00:13:06.819 "qid": 0, 00:13:06.819 "state": "enabled", 00:13:06.819 "listen_address": { 00:13:06.819 "trtype": "TCP", 00:13:06.819 "adrfam": "IPv4", 00:13:06.819 "traddr": "10.0.0.2", 00:13:06.819 "trsvcid": "4420" 00:13:06.819 }, 00:13:06.819 "peer_address": { 00:13:06.819 "trtype": "TCP", 00:13:06.819 "adrfam": "IPv4", 00:13:06.819 "traddr": "10.0.0.1", 00:13:06.819 "trsvcid": "37688" 00:13:06.819 }, 00:13:06.819 "auth": { 00:13:06.819 "state": "completed", 00:13:06.819 "digest": "sha512", 00:13:06.819 "dhgroup": "ffdhe6144" 00:13:06.819 } 00:13:06.819 } 00:13:06.819 ]' 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.819 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.078 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:13:07.645 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.645 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:07.645 08:08:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.645 08:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.645 08:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.645 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:07.645 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:07.645 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.904 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.470 00:13:08.470 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.470 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.470 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.729 { 00:13:08.729 "cntlid": 131, 
00:13:08.729 "qid": 0, 00:13:08.729 "state": "enabled", 00:13:08.729 "listen_address": { 00:13:08.729 "trtype": "TCP", 00:13:08.729 "adrfam": "IPv4", 00:13:08.729 "traddr": "10.0.0.2", 00:13:08.729 "trsvcid": "4420" 00:13:08.729 }, 00:13:08.729 "peer_address": { 00:13:08.729 "trtype": "TCP", 00:13:08.729 "adrfam": "IPv4", 00:13:08.729 "traddr": "10.0.0.1", 00:13:08.729 "trsvcid": "37918" 00:13:08.729 }, 00:13:08.729 "auth": { 00:13:08.729 "state": "completed", 00:13:08.729 "digest": "sha512", 00:13:08.729 "dhgroup": "ffdhe6144" 00:13:08.729 } 00:13:08.729 } 00:13:08.729 ]' 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.729 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.987 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:13:09.554 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.554 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:09.554 08:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.554 08:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key2 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.813 08:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.072 08:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.072 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.072 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.354 00:13:10.354 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.354 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.354 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.625 { 00:13:10.625 "cntlid": 133, 00:13:10.625 "qid": 0, 00:13:10.625 "state": "enabled", 00:13:10.625 "listen_address": { 00:13:10.625 "trtype": "TCP", 00:13:10.625 "adrfam": "IPv4", 00:13:10.625 "traddr": "10.0.0.2", 00:13:10.625 "trsvcid": "4420" 00:13:10.625 }, 00:13:10.625 "peer_address": { 00:13:10.625 "trtype": "TCP", 00:13:10.625 "adrfam": "IPv4", 00:13:10.625 "traddr": "10.0.0.1", 00:13:10.625 "trsvcid": "37948" 00:13:10.625 }, 00:13:10.625 "auth": { 00:13:10.625 "state": "completed", 00:13:10.625 "digest": "sha512", 00:13:10.625 "dhgroup": "ffdhe6144" 00:13:10.625 } 00:13:10.625 } 00:13:10.625 ]' 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:10.625 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.883 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.883 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.883 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.883 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:13:11.820 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.820 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:11.820 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.820 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.820 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.820 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.820 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:11.820 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.078 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.336 00:13:12.336 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.336 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.336 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.595 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.595 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.595 08:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:12.595 08:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.595 08:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:12.595 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.595 { 00:13:12.595 "cntlid": 135, 00:13:12.595 "qid": 0, 00:13:12.595 "state": "enabled", 00:13:12.595 "listen_address": { 00:13:12.595 "trtype": "TCP", 00:13:12.595 "adrfam": "IPv4", 00:13:12.595 "traddr": "10.0.0.2", 00:13:12.595 "trsvcid": "4420" 00:13:12.595 }, 00:13:12.595 "peer_address": { 00:13:12.595 "trtype": "TCP", 00:13:12.595 "adrfam": "IPv4", 00:13:12.595 "traddr": "10.0.0.1", 00:13:12.595 "trsvcid": "37970" 00:13:12.595 }, 00:13:12.595 "auth": { 00:13:12.595 "state": "completed", 00:13:12.595 "digest": "sha512", 00:13:12.595 "dhgroup": "ffdhe6144" 00:13:12.595 } 00:13:12.595 } 00:13:12.595 ]' 00:13:12.854 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.854 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.854 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:12.854 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:12.854 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:12.854 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.854 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.854 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.113 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:13:13.681 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.681 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:13.681 08:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:13.681 08:08:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.681 08:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:13.681 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:13.681 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:13.681 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:13.681 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.263 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.544 00:13:14.831 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.831 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.831 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.831 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.831 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.831 08:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.831 08:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.111 08:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:15.111 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.111 { 
00:13:15.111 "cntlid": 137, 00:13:15.111 "qid": 0, 00:13:15.111 "state": "enabled", 00:13:15.111 "listen_address": { 00:13:15.111 "trtype": "TCP", 00:13:15.111 "adrfam": "IPv4", 00:13:15.111 "traddr": "10.0.0.2", 00:13:15.111 "trsvcid": "4420" 00:13:15.111 }, 00:13:15.111 "peer_address": { 00:13:15.111 "trtype": "TCP", 00:13:15.111 "adrfam": "IPv4", 00:13:15.111 "traddr": "10.0.0.1", 00:13:15.111 "trsvcid": "37984" 00:13:15.111 }, 00:13:15.111 "auth": { 00:13:15.111 "state": "completed", 00:13:15.111 "digest": "sha512", 00:13:15.111 "dhgroup": "ffdhe8192" 00:13:15.111 } 00:13:15.111 } 00:13:15.111 ]' 00:13:15.111 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.111 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.111 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.111 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:15.111 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.111 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.111 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.111 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.393 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:13:15.960 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.960 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:15.960 08:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:15.960 08:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.960 08:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:15.960 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:15.960 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:15.960 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.219 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.785 00:13:16.785 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.785 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.785 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.078 { 00:13:17.078 "cntlid": 139, 00:13:17.078 "qid": 0, 00:13:17.078 "state": "enabled", 00:13:17.078 "listen_address": { 00:13:17.078 "trtype": "TCP", 00:13:17.078 "adrfam": "IPv4", 00:13:17.078 "traddr": "10.0.0.2", 00:13:17.078 "trsvcid": "4420" 00:13:17.078 }, 00:13:17.078 "peer_address": { 00:13:17.078 "trtype": "TCP", 00:13:17.078 "adrfam": "IPv4", 00:13:17.078 "traddr": "10.0.0.1", 00:13:17.078 "trsvcid": "38018" 00:13:17.078 }, 00:13:17.078 "auth": { 00:13:17.078 "state": "completed", 00:13:17.078 "digest": "sha512", 00:13:17.078 "dhgroup": "ffdhe8192" 00:13:17.078 } 00:13:17.078 } 00:13:17.078 ]' 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.078 
08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.078 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.337 08:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:01:MTUzNTViZDI4M2M4ODIxN2NkOTcyNzg5YTlkMDU5YTQcxddQ: --dhchap-ctrl-secret DHHC-1:02:OTQzNjliN2NhOGUxMGUzMGQ2ZDlkNDI4MWE3ZGZkYjZlOGE2M2Y1NjZhNzBmOTNkFzMkSg==: 00:13:17.903 08:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.903 08:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:17.903 08:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.903 08:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.162 08:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.162 08:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.162 08:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:18.162 08:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.420 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.986 00:13:18.986 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:18.986 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.986 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.244 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.244 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.244 08:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.244 08:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.244 08:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.244 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.244 { 00:13:19.244 "cntlid": 141, 00:13:19.244 "qid": 0, 00:13:19.244 "state": "enabled", 00:13:19.244 "listen_address": { 00:13:19.244 "trtype": "TCP", 00:13:19.244 "adrfam": "IPv4", 00:13:19.244 "traddr": "10.0.0.2", 00:13:19.244 "trsvcid": "4420" 00:13:19.244 }, 00:13:19.244 "peer_address": { 00:13:19.244 "trtype": "TCP", 00:13:19.244 "adrfam": "IPv4", 00:13:19.244 "traddr": "10.0.0.1", 00:13:19.244 "trsvcid": "51032" 00:13:19.244 }, 00:13:19.244 "auth": { 00:13:19.244 "state": "completed", 00:13:19.244 "digest": "sha512", 00:13:19.244 "dhgroup": "ffdhe8192" 00:13:19.244 } 00:13:19.244 } 00:13:19.244 ]' 00:13:19.244 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.244 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.244 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.244 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:19.244 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.502 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.502 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.502 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.502 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:02:N2UwNGUxZGJmOTEzMmM4OTBkYTMxZjEyMzg3NjMwN2MzNTg4MDczNTMxOGFiYzQ3tnOQRw==: --dhchap-ctrl-secret DHHC-1:01:NTNlYzJjNjdjZjIzM2VkNjMxYjc5MmU0MGNkYjY1NjZ5rYSD: 00:13:20.068 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.068 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:20.068 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:20.068 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.068 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:20.068 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.068 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:20.068 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:20.353 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:20.353 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.353 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:20.353 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:20.353 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:20.353 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.353 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:13:20.353 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:20.353 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.612 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:20.612 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:20.612 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:21.178 00:13:21.178 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.178 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.178 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.436 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.436 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.436 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:21.436 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.436 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:21.437 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:21.437 { 00:13:21.437 "cntlid": 143, 00:13:21.437 "qid": 0, 00:13:21.437 "state": "enabled", 00:13:21.437 "listen_address": { 00:13:21.437 "trtype": "TCP", 00:13:21.437 "adrfam": "IPv4", 00:13:21.437 "traddr": "10.0.0.2", 00:13:21.437 "trsvcid": "4420" 00:13:21.437 }, 00:13:21.437 "peer_address": { 00:13:21.437 "trtype": "TCP", 00:13:21.437 "adrfam": "IPv4", 00:13:21.437 "traddr": "10.0.0.1", 00:13:21.437 "trsvcid": "51064" 00:13:21.437 }, 00:13:21.437 "auth": { 00:13:21.437 "state": "completed", 00:13:21.437 "digest": "sha512", 00:13:21.437 "dhgroup": "ffdhe8192" 00:13:21.437 } 00:13:21.437 } 00:13:21.437 ]' 00:13:21.437 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.437 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.437 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.437 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:21.437 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.437 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.437 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.437 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.005 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:13:22.572 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.572 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.573 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.508 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.508 { 00:13:23.508 "cntlid": 145, 00:13:23.508 "qid": 0, 00:13:23.508 "state": "enabled", 00:13:23.508 "listen_address": { 00:13:23.508 "trtype": "TCP", 00:13:23.508 "adrfam": "IPv4", 00:13:23.508 "traddr": "10.0.0.2", 00:13:23.508 "trsvcid": "4420" 00:13:23.508 }, 00:13:23.508 "peer_address": { 00:13:23.508 "trtype": "TCP", 00:13:23.508 "adrfam": "IPv4", 00:13:23.508 "traddr": "10.0.0.1", 00:13:23.508 "trsvcid": "51090" 00:13:23.508 }, 00:13:23.508 "auth": { 00:13:23.508 "state": "completed", 00:13:23.508 "digest": "sha512", 00:13:23.508 "dhgroup": "ffdhe8192" 00:13:23.508 } 00:13:23.508 } 00:13:23.508 ]' 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == 
\s\h\a\5\1\2 ]] 00:13:23.508 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.767 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:23.767 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.767 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.767 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.767 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.026 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:00:ZTI0ZGQzMDBhZmM1NDU0OTRiNTc3ODU5MDE3Y2MwZDljYjczYmNmOWYwYzhlMjQ3k5VyGw==: --dhchap-ctrl-secret DHHC-1:03:Nzc0Y2I2NGMzM2E4MTg2MGVkZjgwODg0OGFjZTUzNDFhMzQ1NGNlOWFhOTNiZmRhYzNhN2NiNjllMjEzNTYwMn6RTFs=: 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:24.961 08:08:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:24.961 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:25.528 request: 00:13:25.528 { 00:13:25.528 "name": "nvme0", 00:13:25.528 "trtype": "tcp", 00:13:25.528 "traddr": "10.0.0.2", 00:13:25.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab", 00:13:25.528 "adrfam": "ipv4", 00:13:25.528 "trsvcid": "4420", 00:13:25.528 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:25.528 "dhchap_key": "key2", 00:13:25.528 "method": "bdev_nvme_attach_controller", 00:13:25.528 "req_id": 1 00:13:25.528 } 00:13:25.528 Got JSON-RPC error response 00:13:25.528 response: 00:13:25.528 { 00:13:25.528 "code": -5, 00:13:25.528 "message": "Input/output error" 00:13:25.528 } 00:13:25.528 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:13:25.528 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:25.528 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:25.529 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:26.096 request: 00:13:26.096 { 00:13:26.096 "name": "nvme0", 00:13:26.096 "trtype": "tcp", 00:13:26.096 "traddr": "10.0.0.2", 00:13:26.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab", 00:13:26.096 "adrfam": "ipv4", 00:13:26.096 "trsvcid": "4420", 00:13:26.096 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:26.096 "dhchap_key": "key1", 00:13:26.096 "dhchap_ctrlr_key": "ckey2", 00:13:26.096 "method": "bdev_nvme_attach_controller", 00:13:26.096 "req_id": 1 00:13:26.096 } 00:13:26.096 Got JSON-RPC error response 00:13:26.096 response: 00:13:26.096 { 00:13:26.096 "code": -5, 00:13:26.096 "message": "Input/output error" 00:13:26.096 } 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key1 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.096 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.665 request: 00:13:26.665 { 00:13:26.665 "name": "nvme0", 00:13:26.665 "trtype": "tcp", 00:13:26.665 "traddr": "10.0.0.2", 00:13:26.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab", 00:13:26.665 "adrfam": "ipv4", 00:13:26.665 "trsvcid": "4420", 00:13:26.665 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:26.665 "dhchap_key": "key1", 00:13:26.665 "dhchap_ctrlr_key": "ckey1", 00:13:26.665 "method": "bdev_nvme_attach_controller", 00:13:26.665 "req_id": 1 00:13:26.665 } 00:13:26.665 Got JSON-RPC error response 00:13:26.665 response: 00:13:26.665 { 00:13:26.665 "code": -5, 00:13:26.665 "message": "Input/output error" 00:13:26.665 } 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 69307 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 69307 ']' 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 69307 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 69307 00:13:26.665 killing process with pid 69307 
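The failure captured just above is the intended outcome of the mismatched-key check: the target registers only key1 for this host, so an attach that presents key2 must be rejected. A minimal bash sketch of that check, assembled solely from commands and flags that already appear in this trace (the NQNs, address, port, and socket paths are the values used by this run; the shell variables and the final echo are illustrative):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side (default /var/tmp/spdk.sock): allow this host with key1 only
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1
    # host side: offering key2 should fail DH-HMAC-CHAP and surface the
    # JSON-RPC "Input/output error" (code -5) seen in the log
    if ! "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2; then
        echo "attach rejected as expected (DH-HMAC-CHAP key mismatch)"
    fi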
00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 69307' 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 69307 00:13:26.665 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 69307 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72324 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72324 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 72324 ']' 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:26.924 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.861 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:27.861 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:13:27.861 08:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:27.861 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:27.861 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.861 08:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.861 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:27.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.862 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72324 00:13:27.862 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 72324 ']' 00:13:27.862 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.862 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:27.862 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
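At this point the script restarts the NVMe-oF target with authentication debug logging enabled and then blocks until the application answers on its RPC socket. A condensed sketch of that step, reusing only the invocation captured in the trace above; the polling loop is an illustrative stand-in for the waitforlisten helper rather than its actual implementation:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # poll the default RPC socket until the app is ready to accept commands
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done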
00:13:27.862 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:27.862 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.121 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:28.121 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:13:28.121 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:28.121 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:28.121 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.379 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.963 00:13:28.963 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.963 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.963 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.222 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.222 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.222 08:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.222 08:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.222 08:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.222 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.222 { 00:13:29.222 "cntlid": 1, 00:13:29.222 "qid": 0, 
00:13:29.222 "state": "enabled", 00:13:29.222 "listen_address": { 00:13:29.222 "trtype": "TCP", 00:13:29.222 "adrfam": "IPv4", 00:13:29.222 "traddr": "10.0.0.2", 00:13:29.222 "trsvcid": "4420" 00:13:29.222 }, 00:13:29.222 "peer_address": { 00:13:29.222 "trtype": "TCP", 00:13:29.222 "adrfam": "IPv4", 00:13:29.222 "traddr": "10.0.0.1", 00:13:29.222 "trsvcid": "59896" 00:13:29.222 }, 00:13:29.222 "auth": { 00:13:29.222 "state": "completed", 00:13:29.222 "digest": "sha512", 00:13:29.222 "dhgroup": "ffdhe8192" 00:13:29.222 } 00:13:29.222 } 00:13:29.222 ]' 00:13:29.222 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.481 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.481 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.481 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.481 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.481 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.481 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.481 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.740 08:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid 0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-secret DHHC-1:03:MWYxMDYxNTQwNTMyMDZlZjhmMDM5ODE2OTgxOGU1NDVlMzUzY2I3MWMyZjhjOGUzMTRhM2MzODlmMTEzNmZiZda4AhE=: 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --dhchap-key key3 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:30.309 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:30.876 request: 00:13:30.876 { 00:13:30.876 "name": "nvme0", 00:13:30.876 "trtype": "tcp", 00:13:30.876 "traddr": "10.0.0.2", 00:13:30.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab", 00:13:30.876 "adrfam": "ipv4", 00:13:30.876 "trsvcid": "4420", 00:13:30.876 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:30.876 "dhchap_key": "key3", 00:13:30.876 "method": "bdev_nvme_attach_controller", 00:13:30.876 "req_id": 1 00:13:30.876 } 00:13:30.876 Got JSON-RPC error response 00:13:30.876 response: 00:13:30.876 { 00:13:30.876 "code": -5, 00:13:30.876 "message": "Input/output error" 00:13:30.876 } 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:30.876 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:31.136 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:31.136 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:31.136 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:31.136 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local 
es=0 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.395 request: 00:13:31.395 { 00:13:31.395 "name": "nvme0", 00:13:31.395 "trtype": "tcp", 00:13:31.395 "traddr": "10.0.0.2", 00:13:31.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab", 00:13:31.395 "adrfam": "ipv4", 00:13:31.395 "trsvcid": "4420", 00:13:31.395 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:31.395 "dhchap_key": "key3", 00:13:31.395 "method": "bdev_nvme_attach_controller", 00:13:31.395 "req_id": 1 00:13:31.395 } 00:13:31.395 Got JSON-RPC error response 00:13:31.395 response: 00:13:31.395 { 00:13:31.395 "code": -5, 00:13:31.395 "message": "Input/output error" 00:13:31.395 } 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:31.395 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:31.655 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:31.914 request: 00:13:31.914 { 00:13:31.914 "name": "nvme0", 00:13:31.914 "trtype": "tcp", 00:13:31.914 "traddr": "10.0.0.2", 00:13:31.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab", 00:13:31.914 "adrfam": "ipv4", 00:13:31.914 "trsvcid": "4420", 00:13:31.914 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:31.914 "dhchap_key": "key0", 00:13:31.914 "dhchap_ctrlr_key": "key1", 00:13:31.914 "method": "bdev_nvme_attach_controller", 00:13:31.914 "req_id": 1 00:13:31.914 } 00:13:31.914 Got JSON-RPC error response 00:13:31.914 response: 00:13:31.914 { 00:13:31.914 "code": -5, 00:13:31.914 "message": "Input/output error" 00:13:31.914 } 00:13:31.914 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:13:31.915 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:31.915 08:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:31.915 08:08:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:31.915 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:31.915 08:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:32.174 00:13:32.433 08:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:32.433 08:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.433 08:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:32.433 08:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.433 08:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.433 08:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69344 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 69344 ']' 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 69344 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 69344 00:13:32.692 killing process with pid 69344 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 69344' 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 69344 00:13:32.692 08:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 69344 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:33.259 rmmod nvme_tcp 00:13:33.259 rmmod nvme_fabrics 00:13:33.259 rmmod nvme_keyring 
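As the run tears down, note that each connect_authenticate iteration traced throughout this log reduces to the same short RPC cycle. The sketch below is assembled only from commands visible in this trace, shown here for the sha512/ffdhe8192/key3 combination used in the final pass; the shell variables are illustrative:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # host side: restrict the digest and DH group the initiator will offer
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # target side: allow this host with key3
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3
    # host side: attach, then verify the negotiated auth state on the target
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3
    "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
    # tear down before the next digest/dhgroup combination
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"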
00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72324 ']' 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72324 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 72324 ']' 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 72324 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:33.259 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 72324 00:13:33.518 killing process with pid 72324 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 72324' 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 72324 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 72324 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.518 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.778 08:08:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:33.778 08:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Evz /tmp/spdk.key-sha256.nMi /tmp/spdk.key-sha384.HV3 /tmp/spdk.key-sha512.FTM /tmp/spdk.key-sha512.yUn /tmp/spdk.key-sha384.IW0 /tmp/spdk.key-sha256.Gqp '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:33.778 00:13:33.778 real 2m46.969s 00:13:33.778 user 6m38.975s 00:13:33.778 sys 0m26.487s 00:13:33.778 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:33.778 08:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.778 ************************************ 00:13:33.778 END TEST nvmf_auth_target 00:13:33.778 ************************************ 00:13:33.778 08:08:55 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:33.778 08:08:55 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:33.778 08:08:55 nvmf_tcp 
-- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:33.778 08:08:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:33.778 08:08:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:33.778 ************************************ 00:13:33.778 START TEST nvmf_bdevio_no_huge 00:13:33.778 ************************************ 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:33.778 * Looking for test storage... 00:13:33.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:33.778 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:33.779 Cannot find device "nvmf_tgt_br" 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:33.779 Cannot find device "nvmf_tgt_br2" 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:33.779 08:08:55 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:33.779 Cannot find device "nvmf_tgt_br" 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:33.779 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:34.038 Cannot find device "nvmf_tgt_br2" 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:34.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:34.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:34.038 08:08:55 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:34.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:13:34.038 00:13:34.038 --- 10.0.0.2 ping statistics --- 00:13:34.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.038 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:34.038 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:34.298 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:34.298 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:13:34.298 00:13:34.298 --- 10.0.0.3 ping statistics --- 00:13:34.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.298 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:34.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:34.298 00:13:34.298 --- 10.0.0.1 ping statistics --- 00:13:34.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.298 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72642 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:34.298 08:08:55 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72642 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 72642 ']' 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:34.298 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:34.298 [2024-06-10 08:08:55.996599] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:13:34.298 [2024-06-10 08:08:55.996736] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:34.298 [2024-06-10 08:08:56.142147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.557 [2024-06-10 08:08:56.302240] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.557 [2024-06-10 08:08:56.302306] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.557 [2024-06-10 08:08:56.302321] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.557 [2024-06-10 08:08:56.302332] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.557 [2024-06-10 08:08:56.302341] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
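For reference, the nvmf_veth_init sequence traced above (nvmf/common.sh@166-207) amounts to a small veth-plus-bridge topology between the initiator and a target network namespace. A condensed standalone sketch follows; it reuses the interface names and addresses from the trace, omits the second target interface (nvmf_tgt_if2 / 10.0.0.3) as well as error handling and cleanup, and assumes root privileges.

```bash
# Minimal sketch of the harness topology: one namespace for the target,
# veth pairs bridged together, and an iptables rule for the NVMe/TCP port.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                             # tie both sides together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                          # reachability check
```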
00:13:34.557 [2024-06-10 08:08:56.303216] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:13:34.557 [2024-06-10 08:08:56.303371] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:13:34.557 [2024-06-10 08:08:56.303498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:13:34.557 [2024-06-10 08:08:56.303509] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.557 [2024-06-10 08:08:56.309617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:35.494 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:35.494 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:13:35.494 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:35.494 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:35.494 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:35.494 [2024-06-10 08:08:57.045683] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:35.494 Malloc0 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:35.494 [2024-06-10 08:08:57.095713] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:35.494 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:35.494 { 00:13:35.494 "params": { 00:13:35.494 "name": "Nvme$subsystem", 00:13:35.494 "trtype": "$TEST_TRANSPORT", 00:13:35.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:35.494 "adrfam": "ipv4", 00:13:35.495 "trsvcid": "$NVMF_PORT", 00:13:35.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:35.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:35.495 "hdgst": ${hdgst:-false}, 00:13:35.495 "ddgst": ${ddgst:-false} 00:13:35.495 }, 00:13:35.495 "method": "bdev_nvme_attach_controller" 00:13:35.495 } 00:13:35.495 EOF 00:13:35.495 )") 00:13:35.495 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:35.495 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:35.495 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:35.495 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:35.495 "params": { 00:13:35.495 "name": "Nvme1", 00:13:35.495 "trtype": "tcp", 00:13:35.495 "traddr": "10.0.0.2", 00:13:35.495 "adrfam": "ipv4", 00:13:35.495 "trsvcid": "4420", 00:13:35.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:35.495 "hdgst": false, 00:13:35.495 "ddgst": false 00:13:35.495 }, 00:13:35.495 "method": "bdev_nvme_attach_controller" 00:13:35.495 }' 00:13:35.495 [2024-06-10 08:08:57.161031] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
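The --json /dev/fd/62 argument above hands bdevio a generated SPDK JSON config; the bdev_nvme_attach_controller fragment it contains is printed verbatim in the trace. Rendered into a standalone file, such a config plausibly looks like the sketch below. The outer "subsystems"/"config" wrapper is the generic SPDK JSON-config shape and is assumed here rather than copied from gen_nvmf_target_json, and the file path is made up for illustration.

```bash
# Hypothetical on-disk rendering of the config bdevio receives on /dev/fd/62.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same bdevio invocation as the harness: no hugepages, 1024 MiB of regular memory.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json /tmp/nvme1.json --no-huge -s 1024
```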
00:13:35.495 [2024-06-10 08:08:57.161163] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72678 ] 00:13:35.495 [2024-06-10 08:08:57.309691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.754 [2024-06-10 08:08:57.466939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.754 [2024-06-10 08:08:57.467003] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.754 [2024-06-10 08:08:57.467018] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.754 [2024-06-10 08:08:57.480768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:36.013 I/O targets: 00:13:36.013 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:36.013 00:13:36.013 00:13:36.013 CUnit - A unit testing framework for C - Version 2.1-3 00:13:36.013 http://cunit.sourceforge.net/ 00:13:36.013 00:13:36.013 00:13:36.013 Suite: bdevio tests on: Nvme1n1 00:13:36.013 Test: blockdev write read block ...passed 00:13:36.013 Test: blockdev write zeroes read block ...passed 00:13:36.013 Test: blockdev write zeroes read no split ...passed 00:13:36.013 Test: blockdev write zeroes read split ...passed 00:13:36.013 Test: blockdev write zeroes read split partial ...passed 00:13:36.013 Test: blockdev reset ...[2024-06-10 08:08:57.698698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:36.013 [2024-06-10 08:08:57.698839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15056b0 (9): Bad file descriptor 00:13:36.013 [2024-06-10 08:08:57.718237] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:36.013 passed 00:13:36.013 Test: blockdev write read 8 blocks ...passed 00:13:36.013 Test: blockdev write read size > 128k ...passed 00:13:36.013 Test: blockdev write read invalid size ...passed 00:13:36.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:36.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:36.013 Test: blockdev write read max offset ...passed 00:13:36.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:36.013 Test: blockdev writev readv 8 blocks ...passed 00:13:36.013 Test: blockdev writev readv 30 x 1block ...passed 00:13:36.013 Test: blockdev writev readv block ...passed 00:13:36.013 Test: blockdev writev readv size > 128k ...passed 00:13:36.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:36.013 Test: blockdev comparev and writev ...[2024-06-10 08:08:57.728501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.013 [2024-06-10 08:08:57.728630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:36.013 [2024-06-10 08:08:57.728649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.013 [2024-06-10 08:08:57.728660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:36.013 [2024-06-10 08:08:57.729175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.013 [2024-06-10 08:08:57.729202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:36.013 [2024-06-10 08:08:57.729221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.013 [2024-06-10 08:08:57.729230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:36.013 [2024-06-10 08:08:57.729542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.013 [2024-06-10 08:08:57.729565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:36.013 [2024-06-10 08:08:57.729582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.013 [2024-06-10 08:08:57.729592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:36.014 [2024-06-10 08:08:57.730013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.014 [2024-06-10 08:08:57.730058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:36.014 [2024-06-10 08:08:57.730091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.014 [2024-06-10 08:08:57.730117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:36.014 passed 00:13:36.014 Test: blockdev nvme passthru rw ...passed 00:13:36.014 Test: blockdev nvme passthru vendor specific ...[2024-06-10 08:08:57.730983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.014 [2024-06-10 08:08:57.731014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:36.014 [2024-06-10 08:08:57.731163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.014 [2024-06-10 08:08:57.731201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:36.014 [2024-06-10 08:08:57.731313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.014 [2024-06-10 08:08:57.731350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:36.014 [2024-06-10 08:08:57.731455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.014 [2024-06-10 08:08:57.731476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:36.014 passed 00:13:36.014 Test: blockdev nvme admin passthru ...passed 00:13:36.014 Test: blockdev copy ...passed 00:13:36.014 00:13:36.014 Run Summary: Type Total Ran Passed Failed Inactive 00:13:36.014 suites 1 1 n/a 0 0 00:13:36.014 tests 23 23 23 0 0 00:13:36.014 asserts 152 152 152 0 n/a 00:13:36.014 00:13:36.014 Elapsed time = 0.199 seconds 00:13:36.273 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.273 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.273 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.273 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.273 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:36.273 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:36.273 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.273 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.532 rmmod nvme_tcp 00:13:36.532 rmmod nvme_fabrics 00:13:36.532 rmmod nvme_keyring 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72642 ']' 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72642 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 72642 ']' 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 72642 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 72642 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 72642' 00:13:36.532 killing process with pid 72642 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 72642 00:13:36.532 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 72642 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:37.101 00:13:37.101 real 0m3.233s 00:13:37.101 user 0m10.605s 00:13:37.101 sys 0m1.310s 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:37.101 08:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:37.101 ************************************ 00:13:37.101 END TEST nvmf_bdevio_no_huge 00:13:37.101 ************************************ 00:13:37.101 08:08:58 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:37.101 08:08:58 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:37.101 08:08:58 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:37.101 08:08:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:37.101 ************************************ 00:13:37.101 START TEST nvmf_tls 00:13:37.101 ************************************ 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:37.102 * Looking for test storage... 
00:13:37.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:37.102 Cannot find device "nvmf_tgt_br" 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:37.102 Cannot find device "nvmf_tgt_br2" 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:37.102 Cannot find device "nvmf_tgt_br" 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:37.102 Cannot find device "nvmf_tgt_br2" 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:37.102 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:37.362 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:37.362 08:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:37.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:37.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:37.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:13:37.362 00:13:37.362 --- 10.0.0.2 ping statistics --- 00:13:37.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.362 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:37.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:37.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.136 ms 00:13:37.362 00:13:37.362 --- 10.0.0.3 ping statistics --- 00:13:37.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.362 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:37.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:37.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:37.362 00:13:37.362 --- 10.0.0.1 ping statistics --- 00:13:37.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.362 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72860 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:37.362 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72860 00:13:37.621 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 72860 ']' 00:13:37.621 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.621 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:37.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.621 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.621 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:37.621 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.621 [2024-06-10 08:08:59.280470] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:13:37.621 [2024-06-10 08:08:59.280628] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.621 [2024-06-10 08:08:59.420959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.880 [2024-06-10 08:08:59.547609] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.880 [2024-06-10 08:08:59.547660] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:37.880 [2024-06-10 08:08:59.547674] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.880 [2024-06-10 08:08:59.547685] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.880 [2024-06-10 08:08:59.547694] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.880 [2024-06-10 08:08:59.547723] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.448 08:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:38.448 08:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:13:38.448 08:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:38.448 08:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:38.448 08:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.448 08:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.448 08:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:38.448 08:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:39.026 true 00:13:39.026 08:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:39.026 08:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:39.284 08:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:39.284 08:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:39.284 08:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:39.542 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:39.542 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:39.799 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:39.799 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:39.799 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:40.058 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:40.058 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:40.317 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:40.317 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:40.317 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:40.317 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:40.574 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:40.574 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:40.574 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:40.831 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:40.832 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:13:41.090 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:41.090 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:41.090 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:41.348 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:41.348 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:41.607 08:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:41.865 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:41.866 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:41.866 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.xRygmvED4L 00:13:41.866 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:41.866 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ObAJbBwMMt 00:13:41.866 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:41.866 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:41.866 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.xRygmvED4L 00:13:41.866 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ObAJbBwMMt 00:13:41.866 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:42.125 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:42.383 [2024-06-10 08:09:04.022413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:42.383 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.xRygmvED4L 00:13:42.383 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xRygmvED4L 00:13:42.383 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:42.642 [2024-06-10 08:09:04.387242] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.642 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:42.900 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:43.158 [2024-06-10 08:09:04.835338] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:43.158 [2024-06-10 08:09:04.835571] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.158 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:43.417 malloc0 00:13:43.417 08:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:43.676 08:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xRygmvED4L 00:13:43.997 [2024-06-10 08:09:05.555286] tcp.c:3707:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:43.997 08:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xRygmvED4L 00:13:53.972 Initializing NVMe Controllers 00:13:53.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:53.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:53.972 Initialization complete. Launching workers. 
00:13:53.972 ======================================================== 00:13:53.972 Latency(us) 00:13:53.972 Device Information : IOPS MiB/s Average min max 00:13:53.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7641.25 29.85 8378.59 1183.12 10262.57 00:13:53.972 ======================================================== 00:13:53.972 Total : 7641.25 29.85 8378.59 1183.12 10262.57 00:13:53.972 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xRygmvED4L 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xRygmvED4L' 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73097 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73097 /var/tmp/bdevperf.sock 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73097 ']' 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:53.972 08:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.972 [2024-06-10 08:09:15.826540] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:13:53.972 [2024-06-10 08:09:15.826655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73097 ] 00:13:54.230 [2024-06-10 08:09:15.966627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.489 [2024-06-10 08:09:16.111302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.489 [2024-06-10 08:09:16.169673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:55.056 08:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:55.056 08:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:13:55.056 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xRygmvED4L 00:13:55.315 [2024-06-10 08:09:17.000466] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:55.315 [2024-06-10 08:09:17.000598] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:55.315 TLSTESTn1 00:13:55.315 08:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:55.573 Running I/O for 10 seconds... 00:14:05.546 00:14:05.546 Latency(us) 00:14:05.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.546 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:05.546 Verification LBA range: start 0x0 length 0x2000 00:14:05.546 TLSTESTn1 : 10.02 3644.25 14.24 0.00 0.00 35064.41 4349.21 38606.66 00:14:05.546 =================================================================================================================== 00:14:05.546 Total : 3644.25 14.24 0.00 0.00 35064.41 4349.21 38606.66 00:14:05.546 0 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73097 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73097 ']' 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73097 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73097 00:14:05.546 killing process with pid 73097 00:14:05.546 Received shutdown signal, test time was about 10.000000 seconds 00:14:05.546 00:14:05.546 Latency(us) 00:14:05.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.546 =================================================================================================================== 00:14:05.546 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 
00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73097' 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73097 00:14:05.546 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73097 00:14:05.546 [2024-06-10 08:09:27.265342] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ObAJbBwMMt 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ObAJbBwMMt 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ObAJbBwMMt 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ObAJbBwMMt' 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73231 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73231 /var/tmp/bdevperf.sock 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73231 ']' 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:05.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:05.805 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.805 [2024-06-10 08:09:27.596145] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:14:05.805 [2024-06-10 08:09:27.596572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73231 ] 00:14:06.064 [2024-06-10 08:09:27.734310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.064 [2024-06-10 08:09:27.862395] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.322 [2024-06-10 08:09:27.932366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:06.888 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:06.888 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:06.888 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ObAJbBwMMt 00:14:07.147 [2024-06-10 08:09:28.849707] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:07.147 [2024-06-10 08:09:28.849895] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:07.147 [2024-06-10 08:09:28.858225] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:07.147 [2024-06-10 08:09:28.858662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189fc0 (107): Transport endpoint is not connected 00:14:07.147 [2024-06-10 08:09:28.859650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189fc0 (9): Bad file descriptor 00:14:07.147 [2024-06-10 08:09:28.860646] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:07.147 [2024-06-10 08:09:28.860676] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:07.147 [2024-06-10 08:09:28.860694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:07.147 request: 00:14:07.147 { 00:14:07.147 "name": "TLSTEST", 00:14:07.147 "trtype": "tcp", 00:14:07.147 "traddr": "10.0.0.2", 00:14:07.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.147 "adrfam": "ipv4", 00:14:07.147 "trsvcid": "4420", 00:14:07.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.147 "psk": "/tmp/tmp.ObAJbBwMMt", 00:14:07.147 "method": "bdev_nvme_attach_controller", 00:14:07.147 "req_id": 1 00:14:07.147 } 00:14:07.147 Got JSON-RPC error response 00:14:07.147 response: 00:14:07.147 { 00:14:07.147 "code": -5, 00:14:07.147 "message": "Input/output error" 00:14:07.147 } 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73231 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73231 ']' 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73231 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73231 00:14:07.147 killing process with pid 73231 00:14:07.147 Received shutdown signal, test time was about 10.000000 seconds 00:14:07.147 00:14:07.147 Latency(us) 00:14:07.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.147 =================================================================================================================== 00:14:07.147 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73231' 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73231 00:14:07.147 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73231 00:14:07.147 [2024-06-10 08:09:28.930645] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xRygmvED4L 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xRygmvED4L 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t 
"$arg")" in 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xRygmvED4L 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xRygmvED4L' 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73264 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73264 /var/tmp/bdevperf.sock 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73264 ']' 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:07.406 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.406 [2024-06-10 08:09:29.252111] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:14:07.406 [2024-06-10 08:09:29.252235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73264 ] 00:14:07.664 [2024-06-10 08:09:29.391157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.664 [2024-06-10 08:09:29.518592] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.922 [2024-06-10 08:09:29.589227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:08.488 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:08.488 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:08.488 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.xRygmvED4L 00:14:08.747 [2024-06-10 08:09:30.450447] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.747 [2024-06-10 08:09:30.450621] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:08.747 [2024-06-10 08:09:30.458267] tcp.c: 933:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:08.747 [2024-06-10 08:09:30.458310] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:08.747 [2024-06-10 08:09:30.458364] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:08.747 [2024-06-10 08:09:30.458429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37fc0 (107): Transport endpoint is not connected 00:14:08.747 [2024-06-10 08:09:30.459416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37fc0 (9): Bad file descriptor 00:14:08.747 [2024-06-10 08:09:30.460412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:08.747 [2024-06-10 08:09:30.460444] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:08.747 [2024-06-10 08:09:30.460463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:08.747 request: 00:14:08.747 { 00:14:08.747 "name": "TLSTEST", 00:14:08.747 "trtype": "tcp", 00:14:08.747 "traddr": "10.0.0.2", 00:14:08.747 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:08.747 "adrfam": "ipv4", 00:14:08.747 "trsvcid": "4420", 00:14:08.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.747 "psk": "/tmp/tmp.xRygmvED4L", 00:14:08.747 "method": "bdev_nvme_attach_controller", 00:14:08.747 "req_id": 1 00:14:08.747 } 00:14:08.747 Got JSON-RPC error response 00:14:08.747 response: 00:14:08.747 { 00:14:08.747 "code": -5, 00:14:08.747 "message": "Input/output error" 00:14:08.747 } 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73264 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73264 ']' 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73264 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73264 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:14:08.747 killing process with pid 73264 00:14:08.747 Received shutdown signal, test time was about 10.000000 seconds 00:14:08.747 00:14:08.747 Latency(us) 00:14:08.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.747 =================================================================================================================== 00:14:08.747 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73264' 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73264 00:14:08.747 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73264 00:14:08.747 [2024-06-10 08:09:30.507656] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xRygmvED4L 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xRygmvED4L 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t 
"$arg")" in 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xRygmvED4L 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:09.005 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xRygmvED4L' 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73286 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73286 /var/tmp/bdevperf.sock 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73286 ']' 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:09.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:09.006 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.006 [2024-06-10 08:09:30.832386] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:14:09.006 [2024-06-10 08:09:30.832502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:14:09.263 [2024-06-10 08:09:30.970154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.263 [2024-06-10 08:09:31.098335] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.522 [2024-06-10 08:09:31.167690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:10.088 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:10.088 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:10.088 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xRygmvED4L 00:14:10.347 [2024-06-10 08:09:32.001295] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:10.347 [2024-06-10 08:09:32.001485] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:10.347 [2024-06-10 08:09:32.008458] tcp.c: 933:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:10.347 [2024-06-10 08:09:32.008508] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:10.347 [2024-06-10 08:09:32.008563] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:10.347 [2024-06-10 08:09:32.009255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a54fc0 (107): Transport endpoint is not connected 00:14:10.347 [2024-06-10 08:09:32.010241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a54fc0 (9): Bad file descriptor 00:14:10.347 [2024-06-10 08:09:32.011237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:10.347 [2024-06-10 08:09:32.011267] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:10.347 [2024-06-10 08:09:32.011285] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:10.347 request: 00:14:10.347 { 00:14:10.347 "name": "TLSTEST", 00:14:10.347 "trtype": "tcp", 00:14:10.347 "traddr": "10.0.0.2", 00:14:10.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.347 "adrfam": "ipv4", 00:14:10.347 "trsvcid": "4420", 00:14:10.347 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:10.347 "psk": "/tmp/tmp.xRygmvED4L", 00:14:10.347 "method": "bdev_nvme_attach_controller", 00:14:10.347 "req_id": 1 00:14:10.347 } 00:14:10.347 Got JSON-RPC error response 00:14:10.347 response: 00:14:10.347 { 00:14:10.347 "code": -5, 00:14:10.347 "message": "Input/output error" 00:14:10.347 } 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73286 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73286 ']' 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73286 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73286 00:14:10.347 killing process with pid 73286 00:14:10.347 Received shutdown signal, test time was about 10.000000 seconds 00:14:10.347 00:14:10.347 Latency(us) 00:14:10.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.347 =================================================================================================================== 00:14:10.347 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73286' 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73286 00:14:10.347 [2024-06-10 08:09:32.058262] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:10.347 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73286 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:10.605 08:09:32 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73314 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73314 /var/tmp/bdevperf.sock 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73314 ']' 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:10.605 08:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.605 [2024-06-10 08:09:32.388636] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:14:10.606 [2024-06-10 08:09:32.389079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73314 ] 00:14:10.863 [2024-06-10 08:09:32.528966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.863 [2024-06-10 08:09:32.658455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.863 [2024-06-10 08:09:32.728331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:11.837 [2024-06-10 08:09:33.558686] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:11.837 [2024-06-10 08:09:33.560164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe4710 (9): Bad file descriptor 00:14:11.837 [2024-06-10 08:09:33.561154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:11.837 [2024-06-10 08:09:33.561189] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:11.837 [2024-06-10 08:09:33.561208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:11.837 request: 00:14:11.837 { 00:14:11.837 "name": "TLSTEST", 00:14:11.837 "trtype": "tcp", 00:14:11.837 "traddr": "10.0.0.2", 00:14:11.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.837 "adrfam": "ipv4", 00:14:11.837 "trsvcid": "4420", 00:14:11.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.837 "method": "bdev_nvme_attach_controller", 00:14:11.837 "req_id": 1 00:14:11.837 } 00:14:11.837 Got JSON-RPC error response 00:14:11.837 response: 00:14:11.837 { 00:14:11.837 "code": -5, 00:14:11.837 "message": "Input/output error" 00:14:11.837 } 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73314 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73314 ']' 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73314 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73314 00:14:11.837 killing process with pid 73314 00:14:11.837 Received shutdown signal, test time was about 10.000000 seconds 00:14:11.837 00:14:11.837 Latency(us) 00:14:11.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.837 =================================================================================================================== 00:14:11.837 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73314' 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73314 00:14:11.837 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73314 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72860 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 72860 ']' 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 72860 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 72860 00:14:12.097 killing process with pid 72860 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 72860' 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 72860 00:14:12.097 [2024-06-10 
08:09:33.895463] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:12.097 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 72860 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.URGuEZqjnl 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.URGuEZqjnl 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73352 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73352 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73352 ']' 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:12.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:12.355 08:09:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.614 [2024-06-10 08:09:34.250318] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:14:12.614 [2024-06-10 08:09:34.250419] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.614 [2024-06-10 08:09:34.386098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.872 [2024-06-10 08:09:34.500923] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.872 [2024-06-10 08:09:34.500991] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.872 [2024-06-10 08:09:34.501003] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.872 [2024-06-10 08:09:34.501012] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.872 [2024-06-10 08:09:34.501019] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.872 [2024-06-10 08:09:34.501049] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.872 [2024-06-10 08:09:34.553975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.439 08:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:13.439 08:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:13.439 08:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.439 08:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:13.439 08:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.439 08:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.439 08:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.URGuEZqjnl 00:14:13.439 08:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.URGuEZqjnl 00:14:13.439 08:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:13.697 [2024-06-10 08:09:35.464907] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.697 08:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:13.956 08:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:14.213 [2024-06-10 08:09:35.936969] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:14.213 [2024-06-10 08:09:35.937204] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.213 08:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:14.472 malloc0 00:14:14.472 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:14.730 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.URGuEZqjnl 00:14:14.988 [2024-06-10 08:09:36.692743] 
tcp.c:3707:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:14.988 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.URGuEZqjnl 00:14:14.988 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:14.988 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:14.988 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:14.988 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.URGuEZqjnl' 00:14:14.988 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.988 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73407 00:14:14.989 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.989 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.989 08:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73407 /var/tmp/bdevperf.sock 00:14:14.989 08:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73407 ']' 00:14:14.989 08:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.989 08:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:14.989 08:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.989 08:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:14.989 08:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.989 [2024-06-10 08:09:36.761973] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:14:14.989 [2024-06-10 08:09:36.762080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73407 ] 00:14:15.247 [2024-06-10 08:09:36.898021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.247 [2024-06-10 08:09:37.031139] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.247 [2024-06-10 08:09:37.100726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:16.182 08:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:16.182 08:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:16.182 08:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.URGuEZqjnl 00:14:16.182 [2024-06-10 08:09:37.897032] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:16.182 [2024-06-10 08:09:37.897215] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:16.182 TLSTESTn1 00:14:16.182 08:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:16.441 Running I/O for 10 seconds... 00:14:26.524 00:14:26.524 Latency(us) 00:14:26.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.524 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:26.524 Verification LBA range: start 0x0 length 0x2000 00:14:26.524 TLSTESTn1 : 10.01 3819.99 14.92 0.00 0.00 33456.33 5749.29 28597.53 00:14:26.524 =================================================================================================================== 00:14:26.524 Total : 3819.99 14.92 0.00 0.00 33456.33 5749.29 28597.53 00:14:26.524 0 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73407 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73407 ']' 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73407 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73407 00:14:26.525 killing process with pid 73407 00:14:26.525 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.525 00:14:26.525 Latency(us) 00:14:26.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.525 =================================================================================================================== 00:14:26.525 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 
00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73407' 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73407 00:14:26.525 [2024-06-10 08:09:48.149024] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:26.525 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73407 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.URGuEZqjnl 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.URGuEZqjnl 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.URGuEZqjnl 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.URGuEZqjnl 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.URGuEZqjnl' 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73536 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73536 /var/tmp/bdevperf.sock 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73536 ']' 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:26.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:26.784 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.784 [2024-06-10 08:09:48.484698] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:14:26.784 [2024-06-10 08:09:48.484835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73536 ] 00:14:26.784 [2024-06-10 08:09:48.618267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.044 [2024-06-10 08:09:48.749397] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.044 [2024-06-10 08:09:48.818106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:27.612 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:27.612 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:27.612 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.URGuEZqjnl 00:14:27.872 [2024-06-10 08:09:49.667465] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:27.872 [2024-06-10 08:09:49.667620] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:27.872 [2024-06-10 08:09:49.667634] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.URGuEZqjnl 00:14:27.872 request: 00:14:27.872 { 00:14:27.872 "name": "TLSTEST", 00:14:27.872 "trtype": "tcp", 00:14:27.872 "traddr": "10.0.0.2", 00:14:27.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:27.872 "adrfam": "ipv4", 00:14:27.872 "trsvcid": "4420", 00:14:27.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.872 "psk": "/tmp/tmp.URGuEZqjnl", 00:14:27.872 "method": "bdev_nvme_attach_controller", 00:14:27.872 "req_id": 1 00:14:27.872 } 00:14:27.872 Got JSON-RPC error response 00:14:27.872 response: 00:14:27.872 { 00:14:27.872 "code": -1, 00:14:27.872 "message": "Operation not permitted" 00:14:27.872 } 00:14:27.872 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73536 00:14:27.872 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73536 ']' 00:14:27.872 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73536 00:14:27.872 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:27.872 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:27.872 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73536 00:14:27.872 killing process with pid 73536 00:14:27.872 Received shutdown signal, test time was about 10.000000 seconds 00:14:27.872 00:14:27.872 Latency(us) 00:14:27.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.872 =================================================================================================================== 00:14:27.872 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:27.872 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:14:27.872 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:14:27.872 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73536' 00:14:27.872 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73536 00:14:27.872 08:09:49 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73536 00:14:28.132 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:28.132 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:14:28.132 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:28.132 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:28.132 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:28.133 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73352 00:14:28.133 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73352 ']' 00:14:28.133 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73352 00:14:28.133 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:28.133 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:28.133 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73352 00:14:28.392 killing process with pid 73352 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73352' 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73352 00:14:28.392 [2024-06-10 08:09:50.006188] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73352 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73574 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73574 00:14:28.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73574 ']' 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:28.392 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.652 [2024-06-10 08:09:50.292453] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:14:28.652 [2024-06-10 08:09:50.292576] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.652 [2024-06-10 08:09:50.427455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.652 [2024-06-10 08:09:50.508601] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.652 [2024-06-10 08:09:50.508692] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.652 [2024-06-10 08:09:50.508717] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.652 [2024-06-10 08:09:50.508725] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.652 [2024-06-10 08:09:50.508731] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.652 [2024-06-10 08:09:50.508762] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.911 [2024-06-10 08:09:50.567574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.URGuEZqjnl 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.URGuEZqjnl 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.URGuEZqjnl 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.URGuEZqjnl 00:14:29.480 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:29.739 [2024-06-10 08:09:51.506956] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.739 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:29.998 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:30.257 [2024-06-10 08:09:51.959078] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:14:30.257 [2024-06-10 08:09:51.959341] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.257 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:30.515 malloc0 00:14:30.515 08:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:30.774 08:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.URGuEZqjnl 00:14:30.774 [2024-06-10 08:09:52.630677] tcp.c:3617:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:30.774 [2024-06-10 08:09:52.630747] tcp.c:3703:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:30.774 [2024-06-10 08:09:52.630808] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:30.774 request: 00:14:30.774 { 00:14:30.774 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.774 "host": "nqn.2016-06.io.spdk:host1", 00:14:30.774 "psk": "/tmp/tmp.URGuEZqjnl", 00:14:30.774 "method": "nvmf_subsystem_add_host", 00:14:30.774 "req_id": 1 00:14:30.774 } 00:14:30.774 Got JSON-RPC error response 00:14:30.774 response: 00:14:30.774 { 00:14:30.774 "code": -32603, 00:14:30.774 "message": "Internal error" 00:14:30.774 } 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73574 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73574 ']' 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73574 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73574 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:31.033 killing process with pid 73574 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73574' 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73574 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73574 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.URGuEZqjnl 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73637 00:14:31.033 08:09:52 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73637 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73637 ']' 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:31.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:31.033 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.292 [2024-06-10 08:09:52.955020] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:14:31.292 [2024-06-10 08:09:52.955142] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.292 [2024-06-10 08:09:53.091141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.551 [2024-06-10 08:09:53.203585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.551 [2024-06-10 08:09:53.203637] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.551 [2024-06-10 08:09:53.203649] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.551 [2024-06-10 08:09:53.203658] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.551 [2024-06-10 08:09:53.203666] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:31.551 [2024-06-10 08:09:53.203691] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.551 [2024-06-10 08:09:53.262389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:32.119 08:09:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:32.119 08:09:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:32.119 08:09:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.119 08:09:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:32.119 08:09:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.119 08:09:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.119 08:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.URGuEZqjnl 00:14:32.119 08:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.URGuEZqjnl 00:14:32.119 08:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:32.378 [2024-06-10 08:09:54.101119] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.378 08:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:32.637 08:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:32.896 [2024-06-10 08:09:54.581262] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:32.896 [2024-06-10 08:09:54.581562] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.896 08:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:33.155 malloc0 00:14:33.155 08:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:33.155 08:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.URGuEZqjnl 00:14:33.414 [2024-06-10 08:09:55.224906] tcp.c:3707:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:33.414 08:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73686 00:14:33.414 08:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:33.414 08:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:33.414 08:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73686 /var/tmp/bdevperf.sock 00:14:33.414 08:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73686 ']' 00:14:33.414 08:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
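(Recap of the target-side setup tls.sh just performed, condensed into a bash sketch built only from the RPCs visible above. The failed attempts at 08:09:49 and 08:09:52 show the one non-obvious requirement: the PSK file must be owner-readable only. With mode 0666 both bdev_nvme_load_psk and tcp_load_psk reject it as "Incorrect permissions for PSK file"; only after chmod 0600 does nvmf_subsystem_add_host succeed. Paths and NQNs are the ones used in this run.)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  chmod 0600 /tmp/tmp.URGuEZqjnl    # group/world-readable PSK files are refused

  $rpc nvmf_create_transport -t tcp -o                      # TCP transport (flags as used by the script)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-secured listener
  $rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MiB ram-backed namespace
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.URGuEZqjnl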
00:14:33.414 08:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:33.414 08:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.414 08:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:33.414 08:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.674 [2024-06-10 08:09:55.291877] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:14:33.674 [2024-06-10 08:09:55.291985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73686 ] 00:14:33.674 [2024-06-10 08:09:55.429462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.932 [2024-06-10 08:09:55.566696] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.932 [2024-06-10 08:09:55.639967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:34.499 08:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:34.499 08:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:34.499 08:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.URGuEZqjnl 00:14:34.759 [2024-06-10 08:09:56.452431] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:34.759 [2024-06-10 08:09:56.452609] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:34.759 TLSTESTn1 00:14:34.759 08:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:35.018 08:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:35.018 "subsystems": [ 00:14:35.018 { 00:14:35.018 "subsystem": "keyring", 00:14:35.018 "config": [] 00:14:35.018 }, 00:14:35.018 { 00:14:35.018 "subsystem": "iobuf", 00:14:35.018 "config": [ 00:14:35.018 { 00:14:35.018 "method": "iobuf_set_options", 00:14:35.018 "params": { 00:14:35.018 "small_pool_count": 8192, 00:14:35.018 "large_pool_count": 1024, 00:14:35.018 "small_bufsize": 8192, 00:14:35.018 "large_bufsize": 135168 00:14:35.018 } 00:14:35.018 } 00:14:35.018 ] 00:14:35.018 }, 00:14:35.018 { 00:14:35.018 "subsystem": "sock", 00:14:35.018 "config": [ 00:14:35.018 { 00:14:35.018 "method": "sock_set_default_impl", 00:14:35.018 "params": { 00:14:35.018 "impl_name": "uring" 00:14:35.018 } 00:14:35.018 }, 00:14:35.018 { 00:14:35.018 "method": "sock_impl_set_options", 00:14:35.018 "params": { 00:14:35.018 "impl_name": "ssl", 00:14:35.018 "recv_buf_size": 4096, 00:14:35.018 "send_buf_size": 4096, 00:14:35.018 "enable_recv_pipe": true, 00:14:35.018 "enable_quickack": false, 00:14:35.018 "enable_placement_id": 0, 00:14:35.018 "enable_zerocopy_send_server": true, 00:14:35.018 "enable_zerocopy_send_client": false, 00:14:35.018 "zerocopy_threshold": 0, 00:14:35.018 "tls_version": 0, 00:14:35.018 "enable_ktls": false 00:14:35.018 } 00:14:35.018 }, 00:14:35.018 { 00:14:35.018 "method": 
"sock_impl_set_options", 00:14:35.018 "params": { 00:14:35.018 "impl_name": "posix", 00:14:35.018 "recv_buf_size": 2097152, 00:14:35.018 "send_buf_size": 2097152, 00:14:35.018 "enable_recv_pipe": true, 00:14:35.018 "enable_quickack": false, 00:14:35.018 "enable_placement_id": 0, 00:14:35.018 "enable_zerocopy_send_server": true, 00:14:35.018 "enable_zerocopy_send_client": false, 00:14:35.018 "zerocopy_threshold": 0, 00:14:35.018 "tls_version": 0, 00:14:35.018 "enable_ktls": false 00:14:35.018 } 00:14:35.018 }, 00:14:35.018 { 00:14:35.018 "method": "sock_impl_set_options", 00:14:35.018 "params": { 00:14:35.018 "impl_name": "uring", 00:14:35.018 "recv_buf_size": 2097152, 00:14:35.018 "send_buf_size": 2097152, 00:14:35.018 "enable_recv_pipe": true, 00:14:35.018 "enable_quickack": false, 00:14:35.018 "enable_placement_id": 0, 00:14:35.018 "enable_zerocopy_send_server": false, 00:14:35.018 "enable_zerocopy_send_client": false, 00:14:35.018 "zerocopy_threshold": 0, 00:14:35.018 "tls_version": 0, 00:14:35.018 "enable_ktls": false 00:14:35.018 } 00:14:35.018 } 00:14:35.018 ] 00:14:35.018 }, 00:14:35.018 { 00:14:35.018 "subsystem": "vmd", 00:14:35.018 "config": [] 00:14:35.018 }, 00:14:35.018 { 00:14:35.018 "subsystem": "accel", 00:14:35.018 "config": [ 00:14:35.018 { 00:14:35.018 "method": "accel_set_options", 00:14:35.018 "params": { 00:14:35.018 "small_cache_size": 128, 00:14:35.018 "large_cache_size": 16, 00:14:35.018 "task_count": 2048, 00:14:35.018 "sequence_count": 2048, 00:14:35.018 "buf_count": 2048 00:14:35.018 } 00:14:35.018 } 00:14:35.018 ] 00:14:35.018 }, 00:14:35.018 { 00:14:35.018 "subsystem": "bdev", 00:14:35.018 "config": [ 00:14:35.018 { 00:14:35.018 "method": "bdev_set_options", 00:14:35.018 "params": { 00:14:35.018 "bdev_io_pool_size": 65535, 00:14:35.018 "bdev_io_cache_size": 256, 00:14:35.019 "bdev_auto_examine": true, 00:14:35.019 "iobuf_small_cache_size": 128, 00:14:35.019 "iobuf_large_cache_size": 16 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "bdev_raid_set_options", 00:14:35.019 "params": { 00:14:35.019 "process_window_size_kb": 1024 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "bdev_iscsi_set_options", 00:14:35.019 "params": { 00:14:35.019 "timeout_sec": 30 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "bdev_nvme_set_options", 00:14:35.019 "params": { 00:14:35.019 "action_on_timeout": "none", 00:14:35.019 "timeout_us": 0, 00:14:35.019 "timeout_admin_us": 0, 00:14:35.019 "keep_alive_timeout_ms": 10000, 00:14:35.019 "arbitration_burst": 0, 00:14:35.019 "low_priority_weight": 0, 00:14:35.019 "medium_priority_weight": 0, 00:14:35.019 "high_priority_weight": 0, 00:14:35.019 "nvme_adminq_poll_period_us": 10000, 00:14:35.019 "nvme_ioq_poll_period_us": 0, 00:14:35.019 "io_queue_requests": 0, 00:14:35.019 "delay_cmd_submit": true, 00:14:35.019 "transport_retry_count": 4, 00:14:35.019 "bdev_retry_count": 3, 00:14:35.019 "transport_ack_timeout": 0, 00:14:35.019 "ctrlr_loss_timeout_sec": 0, 00:14:35.019 "reconnect_delay_sec": 0, 00:14:35.019 "fast_io_fail_timeout_sec": 0, 00:14:35.019 "disable_auto_failback": false, 00:14:35.019 "generate_uuids": false, 00:14:35.019 "transport_tos": 0, 00:14:35.019 "nvme_error_stat": false, 00:14:35.019 "rdma_srq_size": 0, 00:14:35.019 "io_path_stat": false, 00:14:35.019 "allow_accel_sequence": false, 00:14:35.019 "rdma_max_cq_size": 0, 00:14:35.019 "rdma_cm_event_timeout_ms": 0, 00:14:35.019 "dhchap_digests": [ 00:14:35.019 "sha256", 00:14:35.019 "sha384", 00:14:35.019 "sha512" 
00:14:35.019 ], 00:14:35.019 "dhchap_dhgroups": [ 00:14:35.019 "null", 00:14:35.019 "ffdhe2048", 00:14:35.019 "ffdhe3072", 00:14:35.019 "ffdhe4096", 00:14:35.019 "ffdhe6144", 00:14:35.019 "ffdhe8192" 00:14:35.019 ] 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "bdev_nvme_set_hotplug", 00:14:35.019 "params": { 00:14:35.019 "period_us": 100000, 00:14:35.019 "enable": false 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "bdev_malloc_create", 00:14:35.019 "params": { 00:14:35.019 "name": "malloc0", 00:14:35.019 "num_blocks": 8192, 00:14:35.019 "block_size": 4096, 00:14:35.019 "physical_block_size": 4096, 00:14:35.019 "uuid": "7ba72356-34ef-4a8c-9b87-efc426c5803e", 00:14:35.019 "optimal_io_boundary": 0 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "bdev_wait_for_examine" 00:14:35.019 } 00:14:35.019 ] 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "subsystem": "nbd", 00:14:35.019 "config": [] 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "subsystem": "scheduler", 00:14:35.019 "config": [ 00:14:35.019 { 00:14:35.019 "method": "framework_set_scheduler", 00:14:35.019 "params": { 00:14:35.019 "name": "static" 00:14:35.019 } 00:14:35.019 } 00:14:35.019 ] 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "subsystem": "nvmf", 00:14:35.019 "config": [ 00:14:35.019 { 00:14:35.019 "method": "nvmf_set_config", 00:14:35.019 "params": { 00:14:35.019 "discovery_filter": "match_any", 00:14:35.019 "admin_cmd_passthru": { 00:14:35.019 "identify_ctrlr": false 00:14:35.019 } 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "nvmf_set_max_subsystems", 00:14:35.019 "params": { 00:14:35.019 "max_subsystems": 1024 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "nvmf_set_crdt", 00:14:35.019 "params": { 00:14:35.019 "crdt1": 0, 00:14:35.019 "crdt2": 0, 00:14:35.019 "crdt3": 0 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "nvmf_create_transport", 00:14:35.019 "params": { 00:14:35.019 "trtype": "TCP", 00:14:35.019 "max_queue_depth": 128, 00:14:35.019 "max_io_qpairs_per_ctrlr": 127, 00:14:35.019 "in_capsule_data_size": 4096, 00:14:35.019 "max_io_size": 131072, 00:14:35.019 "io_unit_size": 131072, 00:14:35.019 "max_aq_depth": 128, 00:14:35.019 "num_shared_buffers": 511, 00:14:35.019 "buf_cache_size": 4294967295, 00:14:35.019 "dif_insert_or_strip": false, 00:14:35.019 "zcopy": false, 00:14:35.019 "c2h_success": false, 00:14:35.019 "sock_priority": 0, 00:14:35.019 "abort_timeout_sec": 1, 00:14:35.019 "ack_timeout": 0, 00:14:35.019 "data_wr_pool_size": 0 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "nvmf_create_subsystem", 00:14:35.019 "params": { 00:14:35.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.019 "allow_any_host": false, 00:14:35.019 "serial_number": "SPDK00000000000001", 00:14:35.019 "model_number": "SPDK bdev Controller", 00:14:35.019 "max_namespaces": 10, 00:14:35.019 "min_cntlid": 1, 00:14:35.019 "max_cntlid": 65519, 00:14:35.019 "ana_reporting": false 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "nvmf_subsystem_add_host", 00:14:35.019 "params": { 00:14:35.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.019 "host": "nqn.2016-06.io.spdk:host1", 00:14:35.019 "psk": "/tmp/tmp.URGuEZqjnl" 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "nvmf_subsystem_add_ns", 00:14:35.019 "params": { 00:14:35.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.019 "namespace": { 00:14:35.019 "nsid": 1, 00:14:35.019 "bdev_name": "malloc0", 
00:14:35.019 "nguid": "7BA7235634EF4A8C9B87EFC426C5803E", 00:14:35.019 "uuid": "7ba72356-34ef-4a8c-9b87-efc426c5803e", 00:14:35.019 "no_auto_visible": false 00:14:35.019 } 00:14:35.019 } 00:14:35.019 }, 00:14:35.019 { 00:14:35.019 "method": "nvmf_subsystem_add_listener", 00:14:35.019 "params": { 00:14:35.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.019 "listen_address": { 00:14:35.019 "trtype": "TCP", 00:14:35.019 "adrfam": "IPv4", 00:14:35.019 "traddr": "10.0.0.2", 00:14:35.019 "trsvcid": "4420" 00:14:35.019 }, 00:14:35.019 "secure_channel": true 00:14:35.019 } 00:14:35.019 } 00:14:35.019 ] 00:14:35.019 } 00:14:35.019 ] 00:14:35.019 }' 00:14:35.019 08:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:35.588 08:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:35.588 "subsystems": [ 00:14:35.588 { 00:14:35.588 "subsystem": "keyring", 00:14:35.588 "config": [] 00:14:35.588 }, 00:14:35.588 { 00:14:35.588 "subsystem": "iobuf", 00:14:35.588 "config": [ 00:14:35.588 { 00:14:35.588 "method": "iobuf_set_options", 00:14:35.588 "params": { 00:14:35.588 "small_pool_count": 8192, 00:14:35.588 "large_pool_count": 1024, 00:14:35.588 "small_bufsize": 8192, 00:14:35.588 "large_bufsize": 135168 00:14:35.588 } 00:14:35.588 } 00:14:35.588 ] 00:14:35.588 }, 00:14:35.588 { 00:14:35.588 "subsystem": "sock", 00:14:35.588 "config": [ 00:14:35.588 { 00:14:35.588 "method": "sock_set_default_impl", 00:14:35.588 "params": { 00:14:35.588 "impl_name": "uring" 00:14:35.588 } 00:14:35.588 }, 00:14:35.588 { 00:14:35.588 "method": "sock_impl_set_options", 00:14:35.588 "params": { 00:14:35.588 "impl_name": "ssl", 00:14:35.588 "recv_buf_size": 4096, 00:14:35.588 "send_buf_size": 4096, 00:14:35.588 "enable_recv_pipe": true, 00:14:35.588 "enable_quickack": false, 00:14:35.588 "enable_placement_id": 0, 00:14:35.588 "enable_zerocopy_send_server": true, 00:14:35.588 "enable_zerocopy_send_client": false, 00:14:35.588 "zerocopy_threshold": 0, 00:14:35.588 "tls_version": 0, 00:14:35.588 "enable_ktls": false 00:14:35.588 } 00:14:35.588 }, 00:14:35.588 { 00:14:35.588 "method": "sock_impl_set_options", 00:14:35.588 "params": { 00:14:35.588 "impl_name": "posix", 00:14:35.588 "recv_buf_size": 2097152, 00:14:35.588 "send_buf_size": 2097152, 00:14:35.588 "enable_recv_pipe": true, 00:14:35.588 "enable_quickack": false, 00:14:35.588 "enable_placement_id": 0, 00:14:35.588 "enable_zerocopy_send_server": true, 00:14:35.588 "enable_zerocopy_send_client": false, 00:14:35.588 "zerocopy_threshold": 0, 00:14:35.588 "tls_version": 0, 00:14:35.588 "enable_ktls": false 00:14:35.588 } 00:14:35.588 }, 00:14:35.588 { 00:14:35.588 "method": "sock_impl_set_options", 00:14:35.588 "params": { 00:14:35.588 "impl_name": "uring", 00:14:35.588 "recv_buf_size": 2097152, 00:14:35.588 "send_buf_size": 2097152, 00:14:35.588 "enable_recv_pipe": true, 00:14:35.588 "enable_quickack": false, 00:14:35.588 "enable_placement_id": 0, 00:14:35.588 "enable_zerocopy_send_server": false, 00:14:35.588 "enable_zerocopy_send_client": false, 00:14:35.588 "zerocopy_threshold": 0, 00:14:35.589 "tls_version": 0, 00:14:35.589 "enable_ktls": false 00:14:35.589 } 00:14:35.589 } 00:14:35.589 ] 00:14:35.589 }, 00:14:35.589 { 00:14:35.589 "subsystem": "vmd", 00:14:35.589 "config": [] 00:14:35.589 }, 00:14:35.589 { 00:14:35.589 "subsystem": "accel", 00:14:35.589 "config": [ 00:14:35.589 { 00:14:35.589 "method": "accel_set_options", 00:14:35.589 "params": { 00:14:35.589 
"small_cache_size": 128, 00:14:35.589 "large_cache_size": 16, 00:14:35.589 "task_count": 2048, 00:14:35.589 "sequence_count": 2048, 00:14:35.589 "buf_count": 2048 00:14:35.589 } 00:14:35.589 } 00:14:35.589 ] 00:14:35.589 }, 00:14:35.589 { 00:14:35.589 "subsystem": "bdev", 00:14:35.589 "config": [ 00:14:35.589 { 00:14:35.589 "method": "bdev_set_options", 00:14:35.589 "params": { 00:14:35.589 "bdev_io_pool_size": 65535, 00:14:35.589 "bdev_io_cache_size": 256, 00:14:35.589 "bdev_auto_examine": true, 00:14:35.589 "iobuf_small_cache_size": 128, 00:14:35.589 "iobuf_large_cache_size": 16 00:14:35.589 } 00:14:35.589 }, 00:14:35.589 { 00:14:35.589 "method": "bdev_raid_set_options", 00:14:35.589 "params": { 00:14:35.589 "process_window_size_kb": 1024 00:14:35.589 } 00:14:35.589 }, 00:14:35.589 { 00:14:35.589 "method": "bdev_iscsi_set_options", 00:14:35.589 "params": { 00:14:35.589 "timeout_sec": 30 00:14:35.589 } 00:14:35.589 }, 00:14:35.589 { 00:14:35.589 "method": "bdev_nvme_set_options", 00:14:35.589 "params": { 00:14:35.589 "action_on_timeout": "none", 00:14:35.589 "timeout_us": 0, 00:14:35.589 "timeout_admin_us": 0, 00:14:35.589 "keep_alive_timeout_ms": 10000, 00:14:35.589 "arbitration_burst": 0, 00:14:35.589 "low_priority_weight": 0, 00:14:35.589 "medium_priority_weight": 0, 00:14:35.589 "high_priority_weight": 0, 00:14:35.589 "nvme_adminq_poll_period_us": 10000, 00:14:35.589 "nvme_ioq_poll_period_us": 0, 00:14:35.589 "io_queue_requests": 512, 00:14:35.589 "delay_cmd_submit": true, 00:14:35.589 "transport_retry_count": 4, 00:14:35.589 "bdev_retry_count": 3, 00:14:35.589 "transport_ack_timeout": 0, 00:14:35.589 "ctrlr_loss_timeout_sec": 0, 00:14:35.589 "reconnect_delay_sec": 0, 00:14:35.589 "fast_io_fail_timeout_sec": 0, 00:14:35.589 "disable_auto_failback": false, 00:14:35.589 "generate_uuids": false, 00:14:35.589 "transport_tos": 0, 00:14:35.589 "nvme_error_stat": false, 00:14:35.589 "rdma_srq_size": 0, 00:14:35.589 "io_path_stat": false, 00:14:35.589 "allow_accel_sequence": false, 00:14:35.589 "rdma_max_cq_size": 0, 00:14:35.589 "rdma_cm_event_timeout_ms": 0, 00:14:35.589 "dhchap_digests": [ 00:14:35.589 "sha256", 00:14:35.589 "sha384", 00:14:35.589 "sha512" 00:14:35.589 ], 00:14:35.589 "dhchap_dhgroups": [ 00:14:35.589 "null", 00:14:35.589 "ffdhe2048", 00:14:35.589 "ffdhe3072", 00:14:35.589 "ffdhe4096", 00:14:35.589 "ffdhe6144", 00:14:35.589 "ffdhe8192" 00:14:35.589 ] 00:14:35.589 } 00:14:35.589 }, 00:14:35.589 { 00:14:35.589 "method": "bdev_nvme_attach_controller", 00:14:35.589 "params": { 00:14:35.589 "name": "TLSTEST", 00:14:35.589 "trtype": "TCP", 00:14:35.589 "adrfam": "IPv4", 00:14:35.589 "traddr": "10.0.0.2", 00:14:35.589 "trsvcid": "4420", 00:14:35.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.589 "prchk_reftag": false, 00:14:35.589 "prchk_guard": false, 00:14:35.589 "ctrlr_loss_timeout_sec": 0, 00:14:35.589 "reconnect_delay_sec": 0, 00:14:35.589 "fast_io_fail_timeout_sec": 0, 00:14:35.589 "psk": "/tmp/tmp.URGuEZqjnl", 00:14:35.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:35.589 "hdgst": false, 00:14:35.589 "ddgst": false 00:14:35.589 } 00:14:35.589 }, 00:14:35.589 { 00:14:35.589 "method": "bdev_nvme_set_hotplug", 00:14:35.589 "params": { 00:14:35.589 "period_us": 100000, 00:14:35.589 "enable": false 00:14:35.589 } 00:14:35.589 }, 00:14:35.589 { 00:14:35.589 "method": "bdev_wait_for_examine" 00:14:35.589 } 00:14:35.589 ] 00:14:35.589 }, 00:14:35.589 { 00:14:35.589 "subsystem": "nbd", 00:14:35.589 "config": [] 00:14:35.589 } 00:14:35.589 ] 00:14:35.589 }' 00:14:35.589 
08:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73686 00:14:35.589 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73686 ']' 00:14:35.589 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73686 00:14:35.589 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:35.589 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:35.589 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73686 00:14:35.589 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:14:35.589 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:14:35.589 killing process with pid 73686 00:14:35.589 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73686' 00:14:35.589 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.589 00:14:35.589 Latency(us) 00:14:35.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.589 =================================================================================================================== 00:14:35.589 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:35.589 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73686 00:14:35.589 [2024-06-10 08:09:57.235475] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:35.589 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73686 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73637 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73637 ']' 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73637 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73637 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:35.848 killing process with pid 73637 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73637' 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73637 00:14:35.848 [2024-06-10 08:09:57.521981] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:35.848 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73637 00:14:36.108 08:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:36.108 08:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:36.108 "subsystems": [ 00:14:36.108 { 00:14:36.108 "subsystem": "keyring", 00:14:36.108 "config": [] 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "subsystem": "iobuf", 00:14:36.108 "config": [ 00:14:36.108 { 00:14:36.108 "method": "iobuf_set_options", 00:14:36.108 "params": { 00:14:36.108 "small_pool_count": 8192, 00:14:36.108 "large_pool_count": 1024, 00:14:36.108 "small_bufsize": 8192, 00:14:36.108 
"large_bufsize": 135168 00:14:36.108 } 00:14:36.108 } 00:14:36.108 ] 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "subsystem": "sock", 00:14:36.108 "config": [ 00:14:36.108 { 00:14:36.108 "method": "sock_set_default_impl", 00:14:36.108 "params": { 00:14:36.108 "impl_name": "uring" 00:14:36.108 } 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "method": "sock_impl_set_options", 00:14:36.108 "params": { 00:14:36.108 "impl_name": "ssl", 00:14:36.108 "recv_buf_size": 4096, 00:14:36.108 "send_buf_size": 4096, 00:14:36.108 "enable_recv_pipe": true, 00:14:36.108 "enable_quickack": false, 00:14:36.108 "enable_placement_id": 0, 00:14:36.108 "enable_zerocopy_send_server": true, 00:14:36.108 "enable_zerocopy_send_client": false, 00:14:36.108 "zerocopy_threshold": 0, 00:14:36.108 "tls_version": 0, 00:14:36.108 "enable_ktls": false 00:14:36.108 } 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "method": "sock_impl_set_options", 00:14:36.108 "params": { 00:14:36.108 "impl_name": "posix", 00:14:36.108 "recv_buf_size": 2097152, 00:14:36.108 "send_buf_size": 2097152, 00:14:36.108 "enable_recv_pipe": true, 00:14:36.108 "enable_quickack": false, 00:14:36.108 "enable_placement_id": 0, 00:14:36.108 "enable_zerocopy_send_server": true, 00:14:36.108 "enable_zerocopy_send_client": false, 00:14:36.108 "zerocopy_threshold": 0, 00:14:36.108 "tls_version": 0, 00:14:36.108 "enable_ktls": false 00:14:36.108 } 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "method": "sock_impl_set_options", 00:14:36.108 "params": { 00:14:36.108 "impl_name": "uring", 00:14:36.108 "recv_buf_size": 2097152, 00:14:36.108 "send_buf_size": 2097152, 00:14:36.108 "enable_recv_pipe": true, 00:14:36.108 "enable_quickack": false, 00:14:36.108 "enable_placement_id": 0, 00:14:36.108 "enable_zerocopy_send_server": false, 00:14:36.108 "enable_zerocopy_send_client": false, 00:14:36.108 "zerocopy_threshold": 0, 00:14:36.108 "tls_version": 0, 00:14:36.108 "enable_ktls": false 00:14:36.108 } 00:14:36.108 } 00:14:36.108 ] 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "subsystem": "vmd", 00:14:36.108 "config": [] 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "subsystem": "accel", 00:14:36.108 "config": [ 00:14:36.108 { 00:14:36.108 "method": "accel_set_options", 00:14:36.108 "params": { 00:14:36.108 "small_cache_size": 128, 00:14:36.108 "large_cache_size": 16, 00:14:36.108 "task_count": 2048, 00:14:36.108 "sequence_count": 2048, 00:14:36.108 "buf_count": 2048 00:14:36.108 } 00:14:36.108 } 00:14:36.108 ] 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "subsystem": "bdev", 00:14:36.108 "config": [ 00:14:36.108 { 00:14:36.108 "method": "bdev_set_options", 00:14:36.108 "params": { 00:14:36.108 "bdev_io_pool_size": 65535, 00:14:36.108 "bdev_io_cache_size": 256, 00:14:36.108 "bdev_auto_examine": true, 00:14:36.108 "iobuf_small_cache_size": 128, 00:14:36.108 "iobuf_large_cache_size": 16 00:14:36.108 } 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "method": "bdev_raid_set_options", 00:14:36.108 "params": { 00:14:36.108 "process_window_size_kb": 1024 00:14:36.108 } 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "method": "bdev_iscsi_set_options", 00:14:36.108 "params": { 00:14:36.108 "timeout_sec": 30 00:14:36.108 } 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "method": "bdev_nvme_set_options", 00:14:36.108 "params": { 00:14:36.108 "action_on_timeout": "none", 00:14:36.108 "timeout_us": 0, 00:14:36.108 "timeout_admin_us": 0, 00:14:36.108 "keep_alive_timeout_ms": 10000, 00:14:36.108 "arbitration_burst": 0, 00:14:36.108 "low_priority_weight": 0, 00:14:36.108 "medium_priority_weight": 0, 
00:14:36.108 "high_priority_weight": 0, 00:14:36.108 "nvme_adminq_poll_period_us": 10000, 00:14:36.108 "nvme_ioq_poll_period_us": 0, 00:14:36.108 "io_queue_requests": 0, 00:14:36.108 "delay_cmd_submit": true, 00:14:36.108 "transport_retry_count": 4, 00:14:36.108 "bdev_retry_count": 3, 00:14:36.108 "transport_ack_timeout": 0, 00:14:36.108 "ctrlr_loss_timeout_sec": 0, 00:14:36.108 "reconnect_delay_sec": 0, 00:14:36.108 "fast_io_fail_timeout_sec": 0, 00:14:36.108 "disable_auto_failback": false, 00:14:36.108 "generate_uuids": false, 00:14:36.108 "transport_tos": 0, 00:14:36.108 "nvme_error_stat": false, 00:14:36.108 "rdma_srq_size": 0, 00:14:36.108 "io_path_stat": false, 00:14:36.108 "allow_accel_sequence": false, 00:14:36.108 "rdma_max_cq_size": 0, 00:14:36.108 "rdma_cm_event_timeout_ms": 0, 00:14:36.108 "dhchap_digests": [ 00:14:36.108 "sha256", 00:14:36.108 "sha384", 00:14:36.108 "sha512" 00:14:36.108 ], 00:14:36.108 "dhchap_dhgroups": [ 00:14:36.108 "null", 00:14:36.108 "ffdhe2048", 00:14:36.108 "ffdhe3072", 00:14:36.108 "ffdhe4096", 00:14:36.108 "ffdhe6144", 00:14:36.108 "ffdhe8192" 00:14:36.108 ] 00:14:36.108 } 00:14:36.108 }, 00:14:36.108 { 00:14:36.109 "method": "bdev_nvme_set_hotplug", 00:14:36.109 "params": { 00:14:36.109 "period_us": 100000, 00:14:36.109 "enable": false 00:14:36.109 } 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "method": "bdev_malloc_create", 00:14:36.109 "params": { 00:14:36.109 "name": "malloc0", 00:14:36.109 "num_blocks": 8192, 00:14:36.109 "block_size": 4096, 00:14:36.109 "physical_block_size": 4096, 00:14:36.109 "uuid": "7ba72356-34ef-4a8c-9b87-efc426c5803e", 00:14:36.109 "optimal_io_boundary": 0 00:14:36.109 } 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "method": "bdev_wait_for_examine" 00:14:36.109 } 00:14:36.109 ] 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "subsystem": "nbd", 00:14:36.109 "config": [] 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "subsystem": "scheduler", 00:14:36.109 "config": [ 00:14:36.109 { 00:14:36.109 "method": "framework_set_scheduler", 00:14:36.109 "params": { 00:14:36.109 "name": "static" 00:14:36.109 } 00:14:36.109 } 00:14:36.109 ] 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "subsystem": "nvmf", 00:14:36.109 "config": [ 00:14:36.109 { 00:14:36.109 "method": "nvmf_set_config", 00:14:36.109 "params": { 00:14:36.109 "discovery_filter": "match_any", 00:14:36.109 "admin_cmd_passthru": { 00:14:36.109 "identify_ctrlr": false 00:14:36.109 } 00:14:36.109 } 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "method": "nvmf_set_max_subsystems", 00:14:36.109 "params": { 00:14:36.109 "max_subsystems": 1024 00:14:36.109 } 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "method": "nvmf_set_crdt", 00:14:36.109 "params": { 00:14:36.109 "crdt1": 0, 00:14:36.109 "crdt2": 0, 00:14:36.109 "crdt3": 0 00:14:36.109 } 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "method": "nvmf_create_transport", 00:14:36.109 "params": { 00:14:36.109 "trtype": "TCP", 00:14:36.109 "max_queue_depth": 128, 00:14:36.109 "max_io_qpairs_per_ctrlr": 127, 00:14:36.109 "in_capsule_data_size": 4096, 00:14:36.109 "max_io_size": 131072, 00:14:36.109 "io_unit_size": 131072, 00:14:36.109 "max_aq_depth": 128, 00:14:36.109 "num_shared_buffers": 511, 00:14:36.109 "buf_cache_size": 4294967295, 00:14:36.109 "dif_insert_or_strip": false, 00:14:36.109 "zcopy": false, 00:14:36.109 "c2h_success": false, 00:14:36.109 "sock_priority": 0, 00:14:36.109 "abort_timeout_sec": 1, 00:14:36.109 "ack_timeout": 0, 00:14:36.109 "data_wr_pool_size": 0 00:14:36.109 } 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "method": 
"nvmf_create_subsystem", 00:14:36.109 "params": { 00:14:36.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.109 "allow_any_host": false, 00:14:36.109 "serial_number": "SPDK00000000000001", 00:14:36.109 "model_number": "SPDK bdev Controller", 00:14:36.109 "max_namespaces": 10, 00:14:36.109 "min_cntlid": 1, 00:14:36.109 "max_cntlid": 65519, 00:14:36.109 "ana_reporting": false 00:14:36.109 } 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "method": "nvmf_subsystem_add_host", 00:14:36.109 "params": { 00:14:36.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.109 "host": "nqn.2016-06.io.spdk:host1", 00:14:36.109 "psk": "/tmp/tmp.URGuEZqjnl" 00:14:36.109 } 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "method": "nvmf_subsystem_add_ns", 00:14:36.109 "params": { 00:14:36.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.109 "namespace": { 00:14:36.109 "nsid": 1, 00:14:36.109 "bdev_name": "malloc0", 00:14:36.109 "nguid": "7BA7235634EF4A8C9B87EFC426C5803E", 00:14:36.109 "uuid": "7ba72356-34ef-4a8c-9b87-efc426c5803e", 00:14:36.109 "no_auto_visible": false 00:14:36.109 } 00:14:36.109 } 00:14:36.109 }, 00:14:36.109 { 00:14:36.109 "method": "nvmf_subsystem_add_listener", 00:14:36.109 "params": { 00:14:36.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.109 "listen_address": { 00:14:36.109 "trtype": "TCP", 00:14:36.109 "adrfam": "IPv4", 00:14:36.109 "traddr": "10.0.0.2", 00:14:36.109 "trsvcid": "4420" 00:14:36.109 }, 00:14:36.109 "secure_channel": true 00:14:36.109 } 00:14:36.109 } 00:14:36.109 ] 00:14:36.109 } 00:14:36.109 ] 00:14:36.109 }' 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73739 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73739 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73739 ']' 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:36.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.109 08:09:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:36.109 [2024-06-10 08:09:57.835471] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:14:36.109 [2024-06-10 08:09:57.836418] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.368 [2024-06-10 08:09:57.974754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.368 [2024-06-10 08:09:58.089114] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:36.368 [2024-06-10 08:09:58.089173] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.368 [2024-06-10 08:09:58.089184] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.368 [2024-06-10 08:09:58.089193] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.368 [2024-06-10 08:09:58.089200] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.368 [2024-06-10 08:09:58.089294] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.627 [2024-06-10 08:09:58.255938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:36.627 [2024-06-10 08:09:58.325512] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.627 [2024-06-10 08:09:58.341438] tcp.c:3707:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:36.627 [2024-06-10 08:09:58.357662] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:36.627 [2024-06-10 08:09:58.357926] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73767 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73767 /var/tmp/bdevperf.sock 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73767 ']' 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:37.196 08:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:37.196 "subsystems": [ 00:14:37.196 { 00:14:37.196 "subsystem": "keyring", 00:14:37.196 "config": [] 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "subsystem": "iobuf", 00:14:37.196 "config": [ 00:14:37.196 { 00:14:37.196 "method": "iobuf_set_options", 00:14:37.196 "params": { 00:14:37.196 "small_pool_count": 8192, 00:14:37.196 "large_pool_count": 1024, 00:14:37.196 "small_bufsize": 8192, 00:14:37.196 "large_bufsize": 135168 00:14:37.196 } 00:14:37.196 } 00:14:37.196 ] 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "subsystem": "sock", 00:14:37.196 "config": [ 00:14:37.196 { 00:14:37.196 "method": "sock_set_default_impl", 00:14:37.196 "params": { 00:14:37.196 "impl_name": "uring" 00:14:37.196 } 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "method": "sock_impl_set_options", 00:14:37.196 "params": { 00:14:37.196 "impl_name": "ssl", 00:14:37.196 
"recv_buf_size": 4096, 00:14:37.196 "send_buf_size": 4096, 00:14:37.196 "enable_recv_pipe": true, 00:14:37.196 "enable_quickack": false, 00:14:37.196 "enable_placement_id": 0, 00:14:37.196 "enable_zerocopy_send_server": true, 00:14:37.196 "enable_zerocopy_send_client": false, 00:14:37.196 "zerocopy_threshold": 0, 00:14:37.196 "tls_version": 0, 00:14:37.196 "enable_ktls": false 00:14:37.196 } 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "method": "sock_impl_set_options", 00:14:37.196 "params": { 00:14:37.196 "impl_name": "posix", 00:14:37.196 "recv_buf_size": 2097152, 00:14:37.196 "send_buf_size": 2097152, 00:14:37.196 "enable_recv_pipe": true, 00:14:37.196 "enable_quickack": false, 00:14:37.196 "enable_placement_id": 0, 00:14:37.196 "enable_zerocopy_send_server": true, 00:14:37.196 "enable_zerocopy_send_client": false, 00:14:37.196 "zerocopy_threshold": 0, 00:14:37.196 "tls_version": 0, 00:14:37.196 "enable_ktls": false 00:14:37.196 } 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "method": "sock_impl_set_options", 00:14:37.196 "params": { 00:14:37.196 "impl_name": "uring", 00:14:37.196 "recv_buf_size": 2097152, 00:14:37.196 "send_buf_size": 2097152, 00:14:37.196 "enable_recv_pipe": true, 00:14:37.196 "enable_quickack": false, 00:14:37.196 "enable_placement_id": 0, 00:14:37.196 "enable_zerocopy_send_server": false, 00:14:37.196 "enable_zerocopy_send_client": false, 00:14:37.196 "zerocopy_threshold": 0, 00:14:37.196 "tls_version": 0, 00:14:37.196 "enable_ktls": false 00:14:37.196 } 00:14:37.196 } 00:14:37.196 ] 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "subsystem": "vmd", 00:14:37.196 "config": [] 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "subsystem": "accel", 00:14:37.196 "config": [ 00:14:37.196 { 00:14:37.196 "method": "accel_set_options", 00:14:37.196 "params": { 00:14:37.196 "small_cache_size": 128, 00:14:37.196 "large_cache_size": 16, 00:14:37.196 "task_count": 2048, 00:14:37.196 "sequence_count": 2048, 00:14:37.196 "buf_count": 2048 00:14:37.196 } 00:14:37.196 } 00:14:37.196 ] 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "subsystem": "bdev", 00:14:37.196 "config": [ 00:14:37.196 { 00:14:37.196 "method": "bdev_set_options", 00:14:37.196 "params": { 00:14:37.196 "bdev_io_pool_size": 65535, 00:14:37.196 "bdev_io_cache_size": 256, 00:14:37.196 "bdev_auto_examine": true, 00:14:37.196 "iobuf_small_cache_size": 128, 00:14:37.196 "iobuf_large_cache_size": 16 00:14:37.196 } 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "method": "bdev_raid_set_options", 00:14:37.196 "params": { 00:14:37.196 "process_window_size_kb": 1024 00:14:37.196 } 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "method": "bdev_iscsi_set_options", 00:14:37.196 "params": { 00:14:37.196 "timeout_sec": 30 00:14:37.196 } 00:14:37.196 }, 00:14:37.196 { 00:14:37.196 "method": "bdev_nvme_set_options", 00:14:37.196 "params": { 00:14:37.196 "action_on_timeout": "none", 00:14:37.196 "timeout_us": 0, 00:14:37.196 "timeout_admin_us": 0, 00:14:37.196 "keep_alive_timeout_ms": 10000, 00:14:37.196 "arbitration_burst": 0, 00:14:37.196 "low_priority_weight": 0, 00:14:37.196 "medium_priority_weight": 0, 00:14:37.196 "high_priority_weight": 0, 00:14:37.196 "nvme_adminq_poll_period_us": 10000, 00:14:37.196 "nvme_ioq_poll_period_us": 0, 00:14:37.196 "io_queue_requests": 512, 00:14:37.196 "delay_cmd_submit": true, 00:14:37.196 "transport_retry_count": 4, 00:14:37.196 "bdev_retry_count": 3, 00:14:37.196 "transport_ack_timeout": 0, 00:14:37.196 "ctrlr_loss_timeout_sec": 0, 00:14:37.196 "reconnect_delay_sec": 0, 00:14:37.196 "fast_io_fail_timeout_sec": 
0, 00:14:37.196 "disable_auto_failback": false, 00:14:37.196 "generate_uuids": false, 00:14:37.196 "transport_tos": 0, 00:14:37.196 "nvme_error_stat": false, 00:14:37.196 "rdma_srq_size": 0, 00:14:37.196 "io_path_stat": false, 00:14:37.197 "allow_accel_sequence": false, 00:14:37.197 "rdma_max_cq_size": 0, 00:14:37.197 "rdma_cm_event_timeout_ms": 0, 00:14:37.197 "dhchap_digests": [ 00:14:37.197 "sha256", 00:14:37.197 "sha384", 00:14:37.197 "sha512" 00:14:37.197 ], 00:14:37.197 "dhchap_dhgroups": [ 00:14:37.197 "null", 00:14:37.197 "ffdhe2048", 00:14:37.197 "ffdhe3072", 00:14:37.197 "ffdhe4096", 00:14:37.197 "ffdhe6144", 00:14:37.197 "ffdhe8192" 00:14:37.197 ] 00:14:37.197 } 00:14:37.197 }, 00:14:37.197 { 00:14:37.197 "method": "bdev_nvme_attach_controller", 00:14:37.197 "params": { 00:14:37.197 "name": "TLSTEST", 00:14:37.197 "trtype": "TCP", 00:14:37.197 "adrfam": "IPv4", 00:14:37.197 "traddr": "10.0.0.2", 00:14:37.197 "trsvcid": "4420", 00:14:37.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.197 "prchk_reftag": false, 00:14:37.197 "prchk_guard": false, 00:14:37.197 "ctrlr_loss_timeout_sec": 0, 00:14:37.197 "reconnect_delay_sec": 0, 00:14:37.197 "fast_io_fail_timeout_sec": 0, 00:14:37.197 "psk": "/tmp/tmp.URGuEZqjnl", 00:14:37.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:37.197 "hdgst": false, 00:14:37.197 "ddgst": false 00:14:37.197 } 00:14:37.197 }, 00:14:37.197 { 00:14:37.197 "method": "bdev_nvme_set_hotplug", 00:14:37.197 "params": { 00:14:37.197 "period_us": 100000, 00:14:37.197 "enable": false 00:14:37.197 } 00:14:37.197 }, 00:14:37.197 { 00:14:37.197 "method": "bdev_wait_for_examine" 00:14:37.197 } 00:14:37.197 ] 00:14:37.197 }, 00:14:37.197 { 00:14:37.197 "subsystem": "nbd", 00:14:37.197 "config": [] 00:14:37.197 } 00:14:37.197 ] 00:14:37.197 }' 00:14:37.197 08:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.197 08:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:37.197 08:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.197 [2024-06-10 08:09:58.891342] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:14:37.197 [2024-06-10 08:09:58.891443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73767 ] 00:14:37.197 [2024-06-10 08:09:59.025700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.455 [2024-06-10 08:09:59.154766] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.456 [2024-06-10 08:09:59.304630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:37.714 [2024-06-10 08:09:59.346691] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.714 [2024-06-10 08:09:59.346838] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:38.282 08:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:38.282 08:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:38.282 08:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:38.282 Running I/O for 10 seconds... 00:14:48.265 00:14:48.265 Latency(us) 00:14:48.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.265 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:48.265 Verification LBA range: start 0x0 length 0x2000 00:14:48.265 TLSTESTn1 : 10.03 3909.66 15.27 0.00 0.00 32663.48 10128.29 24784.52 00:14:48.265 =================================================================================================================== 00:14:48.265 Total : 3909.66 15.27 0.00 0.00 32663.48 10128.29 24784.52 00:14:48.265 0 00:14:48.265 08:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.265 08:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73767 00:14:48.265 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73767 ']' 00:14:48.265 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73767 00:14:48.265 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:48.265 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:48.265 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73767 00:14:48.265 killing process with pid 73767 00:14:48.265 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.265 00:14:48.265 Latency(us) 00:14:48.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.265 =================================================================================================================== 00:14:48.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.265 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:14:48.265 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:14:48.265 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73767' 00:14:48.265 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73767 00:14:48.265 [2024-06-10 08:10:10.029166] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:48.265 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73767 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73739 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73739 ']' 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73739 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73739 00:14:48.524 killing process with pid 73739 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73739' 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73739 00:14:48.524 [2024-06-10 08:10:10.334365] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:48.524 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73739 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73905 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73905 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73905 ']' 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:48.784 08:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.784 [2024-06-10 08:10:10.643059] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:14:48.784 [2024-06-10 08:10:10.644130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.042 [2024-06-10 08:10:10.791291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.301 [2024-06-10 08:10:10.919979] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:49.301 [2024-06-10 08:10:10.920046] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.301 [2024-06-10 08:10:10.920061] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.301 [2024-06-10 08:10:10.920071] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.301 [2024-06-10 08:10:10.920086] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.301 [2024-06-10 08:10:10.920123] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.301 [2024-06-10 08:10:10.977441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:49.867 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:49.867 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:49.867 08:10:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:49.867 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:49.867 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.867 08:10:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.867 08:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.URGuEZqjnl 00:14:49.867 08:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.URGuEZqjnl 00:14:49.867 08:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:50.125 [2024-06-10 08:10:11.926681] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.125 08:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:50.384 08:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:50.642 [2024-06-10 08:10:12.398760] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:50.642 [2024-06-10 08:10:12.399030] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.642 08:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:50.900 malloc0 00:14:50.900 08:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:51.157 08:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.URGuEZqjnl 00:14:51.415 [2024-06-10 08:10:13.113657] tcp.c:3707:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:51.415 08:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73961 00:14:51.415 08:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:51.415 08:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
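The setup_nvmf_tgt sequence traced above boils down to a handful of rpc.py calls. A minimal sketch, reusing the listener address, NQNs, malloc sizing, and PSK file from this run; all of these values are specific to this job and would differ in another environment:

    # target-side TLS setup as exercised by target/tls.sh, run from the SPDK repo root
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k marks the listener as TLS
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                                                   # 32 MB RAM-backed namespace, 4 KiB blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.URGuEZqjnl   # PSK-by-path form, deprecated per the warning above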
00:14:51.416 08:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73961 /var/tmp/bdevperf.sock 00:14:51.416 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 73961 ']' 00:14:51.416 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.416 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:51.416 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.416 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:51.416 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.416 [2024-06-10 08:10:13.189753] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:14:51.416 [2024-06-10 08:10:13.189927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73961 ] 00:14:51.675 [2024-06-10 08:10:13.332340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.675 [2024-06-10 08:10:13.478938] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.675 [2024-06-10 08:10:13.537037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:52.609 08:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:52.609 08:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:52.609 08:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.URGuEZqjnl 00:14:52.609 08:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:52.867 [2024-06-10 08:10:14.621547] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.867 nvme0n1 00:14:52.867 08:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:53.126 Running I/O for 1 seconds... 
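On the initiator side, this run swaps the deprecated per-controller PSK path for a keyring entry registered over the bdevperf RPC socket before the controller is attached. A minimal sketch, assuming the same socket path, key file, and NQNs as the trace above:

    # register the PSK file as keyring entry "key0" inside the running bdevperf
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.URGuEZqjnl
    # attach to the TLS listener, referencing the keyring entry rather than a raw file path
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # bdevperf was started with -z, so the verify workload only begins once perform_tests is requested
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests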
00:14:54.073 00:14:54.073 Latency(us) 00:14:54.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.073 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.073 Verification LBA range: start 0x0 length 0x2000 00:14:54.073 nvme0n1 : 1.02 3940.47 15.39 0.00 0.00 32117.85 5540.77 23592.96 00:14:54.073 =================================================================================================================== 00:14:54.073 Total : 3940.47 15.39 0.00 0.00 32117.85 5540.77 23592.96 00:14:54.073 0 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 73961 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73961 ']' 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73961 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73961 00:14:54.073 killing process with pid 73961 00:14:54.073 Received shutdown signal, test time was about 1.000000 seconds 00:14:54.073 00:14:54.073 Latency(us) 00:14:54.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.073 =================================================================================================================== 00:14:54.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73961' 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73961 00:14:54.073 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73961 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73905 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 73905 ']' 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 73905 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73905 00:14:54.332 killing process with pid 73905 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73905' 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 73905 00:14:54.332 [2024-06-10 08:10:16.154620] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:54.332 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 73905 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74012 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74012 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 74012 ']' 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:54.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:54.899 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.899 [2024-06-10 08:10:16.510882] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:14:54.899 [2024-06-10 08:10:16.510992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.899 [2024-06-10 08:10:16.646270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.899 [2024-06-10 08:10:16.757733] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.899 [2024-06-10 08:10:16.757798] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.899 [2024-06-10 08:10:16.757819] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.899 [2024-06-10 08:10:16.757827] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.899 [2024-06-10 08:10:16.757834] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
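The app_setup_trace notices printed at every application start can be followed up while the target is running or after it exits. A minimal sketch, assuming shm instance id 0 (the -i 0 used throughout this job) and the default build output directory; the /tmp destination is only an example:

    # snapshot the nvmf tracepoints of the live target (instance id 0)
    build/bin/spdk_trace -s nvmf -i 0
    # or keep the raw trace shared-memory file for offline analysis once the target has exited
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0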
00:14:54.899 [2024-06-10 08:10:16.757857] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.157 [2024-06-10 08:10:16.830976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.750 [2024-06-10 08:10:17.541905] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.750 malloc0 00:14:55.750 [2024-06-10 08:10:17.575593] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:55.750 [2024-06-10 08:10:17.575838] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=74044 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 74044 /var/tmp/bdevperf.sock 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 74044 ']' 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:55.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:55.750 08:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.008 [2024-06-10 08:10:17.649154] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:14:56.008 [2024-06-10 08:10:17.649270] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74044 ] 00:14:56.008 [2024-06-10 08:10:17.783539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.267 [2024-06-10 08:10:17.921344] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.267 [2024-06-10 08:10:17.980532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:56.835 08:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:56.835 08:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:14:56.835 08:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.URGuEZqjnl 00:14:57.095 08:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:57.354 [2024-06-10 08:10:19.074870] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:57.354 nvme0n1 00:14:57.354 08:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:57.613 Running I/O for 1 seconds... 00:14:58.551 00:14:58.551 Latency(us) 00:14:58.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.551 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:58.551 Verification LBA range: start 0x0 length 0x2000 00:14:58.551 nvme0n1 : 1.01 4317.53 16.87 0.00 0.00 29403.34 4110.89 22520.55 00:14:58.551 =================================================================================================================== 00:14:58.551 Total : 4317.53 16.87 0.00 0.00 29403.34 4110.89 22520.55 00:14:58.551 0 00:14:58.551 08:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:58.551 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:58.551 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.811 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:58.811 08:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:58.811 "subsystems": [ 00:14:58.811 { 00:14:58.811 "subsystem": "keyring", 00:14:58.811 "config": [ 00:14:58.811 { 00:14:58.811 "method": "keyring_file_add_key", 00:14:58.811 "params": { 00:14:58.811 "name": "key0", 00:14:58.811 "path": "/tmp/tmp.URGuEZqjnl" 00:14:58.811 } 00:14:58.811 } 00:14:58.811 ] 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "subsystem": "iobuf", 00:14:58.811 "config": [ 00:14:58.811 { 00:14:58.811 "method": "iobuf_set_options", 00:14:58.811 "params": { 00:14:58.811 "small_pool_count": 8192, 00:14:58.811 "large_pool_count": 1024, 00:14:58.811 "small_bufsize": 8192, 00:14:58.811 "large_bufsize": 135168 00:14:58.811 } 00:14:58.811 } 00:14:58.811 ] 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "subsystem": "sock", 00:14:58.811 "config": [ 00:14:58.811 { 00:14:58.811 "method": "sock_set_default_impl", 00:14:58.811 "params": { 00:14:58.811 "impl_name": "uring" 00:14:58.811 } 00:14:58.811 
}, 00:14:58.811 { 00:14:58.811 "method": "sock_impl_set_options", 00:14:58.811 "params": { 00:14:58.811 "impl_name": "ssl", 00:14:58.811 "recv_buf_size": 4096, 00:14:58.811 "send_buf_size": 4096, 00:14:58.811 "enable_recv_pipe": true, 00:14:58.811 "enable_quickack": false, 00:14:58.811 "enable_placement_id": 0, 00:14:58.811 "enable_zerocopy_send_server": true, 00:14:58.811 "enable_zerocopy_send_client": false, 00:14:58.811 "zerocopy_threshold": 0, 00:14:58.811 "tls_version": 0, 00:14:58.811 "enable_ktls": false 00:14:58.811 } 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "method": "sock_impl_set_options", 00:14:58.811 "params": { 00:14:58.811 "impl_name": "posix", 00:14:58.811 "recv_buf_size": 2097152, 00:14:58.811 "send_buf_size": 2097152, 00:14:58.811 "enable_recv_pipe": true, 00:14:58.811 "enable_quickack": false, 00:14:58.811 "enable_placement_id": 0, 00:14:58.811 "enable_zerocopy_send_server": true, 00:14:58.811 "enable_zerocopy_send_client": false, 00:14:58.811 "zerocopy_threshold": 0, 00:14:58.811 "tls_version": 0, 00:14:58.811 "enable_ktls": false 00:14:58.811 } 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "method": "sock_impl_set_options", 00:14:58.811 "params": { 00:14:58.811 "impl_name": "uring", 00:14:58.811 "recv_buf_size": 2097152, 00:14:58.811 "send_buf_size": 2097152, 00:14:58.811 "enable_recv_pipe": true, 00:14:58.811 "enable_quickack": false, 00:14:58.811 "enable_placement_id": 0, 00:14:58.811 "enable_zerocopy_send_server": false, 00:14:58.811 "enable_zerocopy_send_client": false, 00:14:58.811 "zerocopy_threshold": 0, 00:14:58.811 "tls_version": 0, 00:14:58.811 "enable_ktls": false 00:14:58.811 } 00:14:58.811 } 00:14:58.811 ] 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "subsystem": "vmd", 00:14:58.811 "config": [] 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "subsystem": "accel", 00:14:58.811 "config": [ 00:14:58.811 { 00:14:58.811 "method": "accel_set_options", 00:14:58.811 "params": { 00:14:58.811 "small_cache_size": 128, 00:14:58.811 "large_cache_size": 16, 00:14:58.811 "task_count": 2048, 00:14:58.811 "sequence_count": 2048, 00:14:58.811 "buf_count": 2048 00:14:58.811 } 00:14:58.811 } 00:14:58.811 ] 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "subsystem": "bdev", 00:14:58.811 "config": [ 00:14:58.811 { 00:14:58.811 "method": "bdev_set_options", 00:14:58.811 "params": { 00:14:58.811 "bdev_io_pool_size": 65535, 00:14:58.811 "bdev_io_cache_size": 256, 00:14:58.811 "bdev_auto_examine": true, 00:14:58.811 "iobuf_small_cache_size": 128, 00:14:58.811 "iobuf_large_cache_size": 16 00:14:58.811 } 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "method": "bdev_raid_set_options", 00:14:58.811 "params": { 00:14:58.811 "process_window_size_kb": 1024 00:14:58.811 } 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "method": "bdev_iscsi_set_options", 00:14:58.811 "params": { 00:14:58.811 "timeout_sec": 30 00:14:58.811 } 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "method": "bdev_nvme_set_options", 00:14:58.811 "params": { 00:14:58.811 "action_on_timeout": "none", 00:14:58.811 "timeout_us": 0, 00:14:58.811 "timeout_admin_us": 0, 00:14:58.811 "keep_alive_timeout_ms": 10000, 00:14:58.811 "arbitration_burst": 0, 00:14:58.811 "low_priority_weight": 0, 00:14:58.811 "medium_priority_weight": 0, 00:14:58.811 "high_priority_weight": 0, 00:14:58.811 "nvme_adminq_poll_period_us": 10000, 00:14:58.811 "nvme_ioq_poll_period_us": 0, 00:14:58.811 "io_queue_requests": 0, 00:14:58.811 "delay_cmd_submit": true, 00:14:58.811 "transport_retry_count": 4, 00:14:58.811 "bdev_retry_count": 3, 00:14:58.811 
"transport_ack_timeout": 0, 00:14:58.811 "ctrlr_loss_timeout_sec": 0, 00:14:58.811 "reconnect_delay_sec": 0, 00:14:58.811 "fast_io_fail_timeout_sec": 0, 00:14:58.811 "disable_auto_failback": false, 00:14:58.811 "generate_uuids": false, 00:14:58.811 "transport_tos": 0, 00:14:58.811 "nvme_error_stat": false, 00:14:58.811 "rdma_srq_size": 0, 00:14:58.811 "io_path_stat": false, 00:14:58.811 "allow_accel_sequence": false, 00:14:58.811 "rdma_max_cq_size": 0, 00:14:58.811 "rdma_cm_event_timeout_ms": 0, 00:14:58.811 "dhchap_digests": [ 00:14:58.811 "sha256", 00:14:58.811 "sha384", 00:14:58.811 "sha512" 00:14:58.811 ], 00:14:58.811 "dhchap_dhgroups": [ 00:14:58.811 "null", 00:14:58.811 "ffdhe2048", 00:14:58.811 "ffdhe3072", 00:14:58.811 "ffdhe4096", 00:14:58.811 "ffdhe6144", 00:14:58.811 "ffdhe8192" 00:14:58.811 ] 00:14:58.811 } 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "method": "bdev_nvme_set_hotplug", 00:14:58.811 "params": { 00:14:58.811 "period_us": 100000, 00:14:58.811 "enable": false 00:14:58.811 } 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "method": "bdev_malloc_create", 00:14:58.811 "params": { 00:14:58.811 "name": "malloc0", 00:14:58.811 "num_blocks": 8192, 00:14:58.811 "block_size": 4096, 00:14:58.811 "physical_block_size": 4096, 00:14:58.811 "uuid": "04fa106b-9649-4dc1-be02-4c6ee4015b89", 00:14:58.811 "optimal_io_boundary": 0 00:14:58.811 } 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "method": "bdev_wait_for_examine" 00:14:58.811 } 00:14:58.811 ] 00:14:58.812 }, 00:14:58.812 { 00:14:58.812 "subsystem": "nbd", 00:14:58.812 "config": [] 00:14:58.812 }, 00:14:58.812 { 00:14:58.812 "subsystem": "scheduler", 00:14:58.812 "config": [ 00:14:58.812 { 00:14:58.812 "method": "framework_set_scheduler", 00:14:58.812 "params": { 00:14:58.812 "name": "static" 00:14:58.812 } 00:14:58.812 } 00:14:58.812 ] 00:14:58.812 }, 00:14:58.812 { 00:14:58.812 "subsystem": "nvmf", 00:14:58.812 "config": [ 00:14:58.812 { 00:14:58.812 "method": "nvmf_set_config", 00:14:58.812 "params": { 00:14:58.812 "discovery_filter": "match_any", 00:14:58.812 "admin_cmd_passthru": { 00:14:58.812 "identify_ctrlr": false 00:14:58.812 } 00:14:58.812 } 00:14:58.812 }, 00:14:58.812 { 00:14:58.812 "method": "nvmf_set_max_subsystems", 00:14:58.812 "params": { 00:14:58.812 "max_subsystems": 1024 00:14:58.812 } 00:14:58.812 }, 00:14:58.812 { 00:14:58.812 "method": "nvmf_set_crdt", 00:14:58.812 "params": { 00:14:58.812 "crdt1": 0, 00:14:58.812 "crdt2": 0, 00:14:58.812 "crdt3": 0 00:14:58.812 } 00:14:58.812 }, 00:14:58.812 { 00:14:58.812 "method": "nvmf_create_transport", 00:14:58.812 "params": { 00:14:58.812 "trtype": "TCP", 00:14:58.812 "max_queue_depth": 128, 00:14:58.812 "max_io_qpairs_per_ctrlr": 127, 00:14:58.812 "in_capsule_data_size": 4096, 00:14:58.812 "max_io_size": 131072, 00:14:58.812 "io_unit_size": 131072, 00:14:58.812 "max_aq_depth": 128, 00:14:58.812 "num_shared_buffers": 511, 00:14:58.812 "buf_cache_size": 4294967295, 00:14:58.812 "dif_insert_or_strip": false, 00:14:58.812 "zcopy": false, 00:14:58.812 "c2h_success": false, 00:14:58.812 "sock_priority": 0, 00:14:58.812 "abort_timeout_sec": 1, 00:14:58.812 "ack_timeout": 0, 00:14:58.812 "data_wr_pool_size": 0 00:14:58.812 } 00:14:58.812 }, 00:14:58.812 { 00:14:58.812 "method": "nvmf_create_subsystem", 00:14:58.812 "params": { 00:14:58.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.812 "allow_any_host": false, 00:14:58.812 "serial_number": "00000000000000000000", 00:14:58.812 "model_number": "SPDK bdev Controller", 00:14:58.812 "max_namespaces": 32, 00:14:58.812 
"min_cntlid": 1, 00:14:58.812 "max_cntlid": 65519, 00:14:58.812 "ana_reporting": false 00:14:58.812 } 00:14:58.812 }, 00:14:58.812 { 00:14:58.812 "method": "nvmf_subsystem_add_host", 00:14:58.812 "params": { 00:14:58.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.812 "host": "nqn.2016-06.io.spdk:host1", 00:14:58.812 "psk": "key0" 00:14:58.812 } 00:14:58.812 }, 00:14:58.812 { 00:14:58.812 "method": "nvmf_subsystem_add_ns", 00:14:58.812 "params": { 00:14:58.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.812 "namespace": { 00:14:58.812 "nsid": 1, 00:14:58.812 "bdev_name": "malloc0", 00:14:58.812 "nguid": "04FA106B96494DC1BE024C6EE4015B89", 00:14:58.812 "uuid": "04fa106b-9649-4dc1-be02-4c6ee4015b89", 00:14:58.812 "no_auto_visible": false 00:14:58.812 } 00:14:58.812 } 00:14:58.812 }, 00:14:58.812 { 00:14:58.812 "method": "nvmf_subsystem_add_listener", 00:14:58.812 "params": { 00:14:58.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.812 "listen_address": { 00:14:58.812 "trtype": "TCP", 00:14:58.812 "adrfam": "IPv4", 00:14:58.812 "traddr": "10.0.0.2", 00:14:58.812 "trsvcid": "4420" 00:14:58.812 }, 00:14:58.812 "secure_channel": true 00:14:58.812 } 00:14:58.812 } 00:14:58.812 ] 00:14:58.812 } 00:14:58.812 ] 00:14:58.812 }' 00:14:58.812 08:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:59.072 08:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:59.072 "subsystems": [ 00:14:59.072 { 00:14:59.072 "subsystem": "keyring", 00:14:59.072 "config": [ 00:14:59.072 { 00:14:59.072 "method": "keyring_file_add_key", 00:14:59.072 "params": { 00:14:59.072 "name": "key0", 00:14:59.072 "path": "/tmp/tmp.URGuEZqjnl" 00:14:59.072 } 00:14:59.072 } 00:14:59.072 ] 00:14:59.072 }, 00:14:59.072 { 00:14:59.072 "subsystem": "iobuf", 00:14:59.072 "config": [ 00:14:59.072 { 00:14:59.072 "method": "iobuf_set_options", 00:14:59.073 "params": { 00:14:59.073 "small_pool_count": 8192, 00:14:59.073 "large_pool_count": 1024, 00:14:59.073 "small_bufsize": 8192, 00:14:59.073 "large_bufsize": 135168 00:14:59.073 } 00:14:59.073 } 00:14:59.073 ] 00:14:59.073 }, 00:14:59.073 { 00:14:59.073 "subsystem": "sock", 00:14:59.073 "config": [ 00:14:59.073 { 00:14:59.073 "method": "sock_set_default_impl", 00:14:59.073 "params": { 00:14:59.073 "impl_name": "uring" 00:14:59.073 } 00:14:59.073 }, 00:14:59.073 { 00:14:59.073 "method": "sock_impl_set_options", 00:14:59.073 "params": { 00:14:59.073 "impl_name": "ssl", 00:14:59.073 "recv_buf_size": 4096, 00:14:59.073 "send_buf_size": 4096, 00:14:59.073 "enable_recv_pipe": true, 00:14:59.073 "enable_quickack": false, 00:14:59.073 "enable_placement_id": 0, 00:14:59.073 "enable_zerocopy_send_server": true, 00:14:59.073 "enable_zerocopy_send_client": false, 00:14:59.073 "zerocopy_threshold": 0, 00:14:59.073 "tls_version": 0, 00:14:59.073 "enable_ktls": false 00:14:59.073 } 00:14:59.073 }, 00:14:59.073 { 00:14:59.073 "method": "sock_impl_set_options", 00:14:59.073 "params": { 00:14:59.073 "impl_name": "posix", 00:14:59.073 "recv_buf_size": 2097152, 00:14:59.073 "send_buf_size": 2097152, 00:14:59.073 "enable_recv_pipe": true, 00:14:59.073 "enable_quickack": false, 00:14:59.073 "enable_placement_id": 0, 00:14:59.073 "enable_zerocopy_send_server": true, 00:14:59.073 "enable_zerocopy_send_client": false, 00:14:59.073 "zerocopy_threshold": 0, 00:14:59.073 "tls_version": 0, 00:14:59.073 "enable_ktls": false 00:14:59.073 } 00:14:59.073 }, 00:14:59.073 { 00:14:59.073 "method": "sock_impl_set_options", 
00:14:59.073 "params": { 00:14:59.073 "impl_name": "uring", 00:14:59.073 "recv_buf_size": 2097152, 00:14:59.073 "send_buf_size": 2097152, 00:14:59.073 "enable_recv_pipe": true, 00:14:59.073 "enable_quickack": false, 00:14:59.073 "enable_placement_id": 0, 00:14:59.073 "enable_zerocopy_send_server": false, 00:14:59.073 "enable_zerocopy_send_client": false, 00:14:59.073 "zerocopy_threshold": 0, 00:14:59.073 "tls_version": 0, 00:14:59.073 "enable_ktls": false 00:14:59.073 } 00:14:59.073 } 00:14:59.073 ] 00:14:59.073 }, 00:14:59.073 { 00:14:59.073 "subsystem": "vmd", 00:14:59.073 "config": [] 00:14:59.073 }, 00:14:59.073 { 00:14:59.073 "subsystem": "accel", 00:14:59.073 "config": [ 00:14:59.073 { 00:14:59.073 "method": "accel_set_options", 00:14:59.073 "params": { 00:14:59.073 "small_cache_size": 128, 00:14:59.073 "large_cache_size": 16, 00:14:59.073 "task_count": 2048, 00:14:59.073 "sequence_count": 2048, 00:14:59.073 "buf_count": 2048 00:14:59.073 } 00:14:59.073 } 00:14:59.073 ] 00:14:59.073 }, 00:14:59.073 { 00:14:59.073 "subsystem": "bdev", 00:14:59.073 "config": [ 00:14:59.073 { 00:14:59.073 "method": "bdev_set_options", 00:14:59.073 "params": { 00:14:59.073 "bdev_io_pool_size": 65535, 00:14:59.073 "bdev_io_cache_size": 256, 00:14:59.073 "bdev_auto_examine": true, 00:14:59.073 "iobuf_small_cache_size": 128, 00:14:59.073 "iobuf_large_cache_size": 16 00:14:59.073 } 00:14:59.073 }, 00:14:59.073 { 00:14:59.073 "method": "bdev_raid_set_options", 00:14:59.073 "params": { 00:14:59.073 "process_window_size_kb": 1024 00:14:59.073 } 00:14:59.073 }, 00:14:59.073 { 00:14:59.073 "method": "bdev_iscsi_set_options", 00:14:59.073 "params": { 00:14:59.073 "timeout_sec": 30 00:14:59.073 } 00:14:59.073 }, 00:14:59.073 { 00:14:59.073 "method": "bdev_nvme_set_options", 00:14:59.073 "params": { 00:14:59.073 "action_on_timeout": "none", 00:14:59.073 "timeout_us": 0, 00:14:59.073 "timeout_admin_us": 0, 00:14:59.073 "keep_alive_timeout_ms": 10000, 00:14:59.073 "arbitration_burst": 0, 00:14:59.073 "low_priority_weight": 0, 00:14:59.073 "medium_priority_weight": 0, 00:14:59.073 "high_priority_weight": 0, 00:14:59.073 "nvme_adminq_poll_period_us": 10000, 00:14:59.073 "nvme_ioq_poll_period_us": 0, 00:14:59.073 "io_queue_requests": 512, 00:14:59.074 "delay_cmd_submit": true, 00:14:59.074 "transport_retry_count": 4, 00:14:59.074 "bdev_retry_count": 3, 00:14:59.074 "transport_ack_timeout": 0, 00:14:59.074 "ctrlr_loss_timeout_sec": 0, 00:14:59.074 "reconnect_delay_sec": 0, 00:14:59.074 "fast_io_fail_timeout_sec": 0, 00:14:59.074 "disable_auto_failback": false, 00:14:59.074 "generate_uuids": false, 00:14:59.074 "transport_tos": 0, 00:14:59.074 "nvme_error_stat": false, 00:14:59.074 "rdma_srq_size": 0, 00:14:59.074 "io_path_stat": false, 00:14:59.074 "allow_accel_sequence": false, 00:14:59.074 "rdma_max_cq_size": 0, 00:14:59.074 "rdma_cm_event_timeout_ms": 0, 00:14:59.074 "dhchap_digests": [ 00:14:59.074 "sha256", 00:14:59.074 "sha384", 00:14:59.074 "sha512" 00:14:59.074 ], 00:14:59.074 "dhchap_dhgroups": [ 00:14:59.074 "null", 00:14:59.074 "ffdhe2048", 00:14:59.074 "ffdhe3072", 00:14:59.074 "ffdhe4096", 00:14:59.074 "ffdhe6144", 00:14:59.074 "ffdhe8192" 00:14:59.074 ] 00:14:59.074 } 00:14:59.074 }, 00:14:59.074 { 00:14:59.074 "method": "bdev_nvme_attach_controller", 00:14:59.074 "params": { 00:14:59.074 "name": "nvme0", 00:14:59.074 "trtype": "TCP", 00:14:59.074 "adrfam": "IPv4", 00:14:59.074 "traddr": "10.0.0.2", 00:14:59.074 "trsvcid": "4420", 00:14:59.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:59.074 
"prchk_reftag": false, 00:14:59.074 "prchk_guard": false, 00:14:59.074 "ctrlr_loss_timeout_sec": 0, 00:14:59.074 "reconnect_delay_sec": 0, 00:14:59.074 "fast_io_fail_timeout_sec": 0, 00:14:59.074 "psk": "key0", 00:14:59.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:59.074 "hdgst": false, 00:14:59.074 "ddgst": false 00:14:59.074 } 00:14:59.074 }, 00:14:59.074 { 00:14:59.074 "method": "bdev_nvme_set_hotplug", 00:14:59.074 "params": { 00:14:59.074 "period_us": 100000, 00:14:59.074 "enable": false 00:14:59.074 } 00:14:59.074 }, 00:14:59.074 { 00:14:59.074 "method": "bdev_enable_histogram", 00:14:59.074 "params": { 00:14:59.074 "name": "nvme0n1", 00:14:59.074 "enable": true 00:14:59.074 } 00:14:59.074 }, 00:14:59.074 { 00:14:59.074 "method": "bdev_wait_for_examine" 00:14:59.074 } 00:14:59.074 ] 00:14:59.074 }, 00:14:59.074 { 00:14:59.074 "subsystem": "nbd", 00:14:59.074 "config": [] 00:14:59.074 } 00:14:59.074 ] 00:14:59.074 }' 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 74044 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 74044 ']' 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 74044 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 74044 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:59.074 killing process with pid 74044 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 74044' 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 74044 00:14:59.074 Received shutdown signal, test time was about 1.000000 seconds 00:14:59.074 00:14:59.074 Latency(us) 00:14:59.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.074 =================================================================================================================== 00:14:59.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.074 08:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 74044 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 74012 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 74012 ']' 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 74012 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 74012 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:59.334 killing process with pid 74012 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 74012' 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 74012 00:14:59.334 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 74012 00:14:59.593 
08:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:59.593 08:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.593 08:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:59.594 "subsystems": [ 00:14:59.594 { 00:14:59.594 "subsystem": "keyring", 00:14:59.594 "config": [ 00:14:59.594 { 00:14:59.594 "method": "keyring_file_add_key", 00:14:59.594 "params": { 00:14:59.594 "name": "key0", 00:14:59.594 "path": "/tmp/tmp.URGuEZqjnl" 00:14:59.594 } 00:14:59.594 } 00:14:59.594 ] 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "subsystem": "iobuf", 00:14:59.594 "config": [ 00:14:59.594 { 00:14:59.594 "method": "iobuf_set_options", 00:14:59.594 "params": { 00:14:59.594 "small_pool_count": 8192, 00:14:59.594 "large_pool_count": 1024, 00:14:59.594 "small_bufsize": 8192, 00:14:59.594 "large_bufsize": 135168 00:14:59.594 } 00:14:59.594 } 00:14:59.594 ] 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "subsystem": "sock", 00:14:59.594 "config": [ 00:14:59.594 { 00:14:59.594 "method": "sock_set_default_impl", 00:14:59.594 "params": { 00:14:59.594 "impl_name": "uring" 00:14:59.594 } 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "method": "sock_impl_set_options", 00:14:59.594 "params": { 00:14:59.594 "impl_name": "ssl", 00:14:59.594 "recv_buf_size": 4096, 00:14:59.594 "send_buf_size": 4096, 00:14:59.594 "enable_recv_pipe": true, 00:14:59.594 "enable_quickack": false, 00:14:59.594 "enable_placement_id": 0, 00:14:59.594 "enable_zerocopy_send_server": true, 00:14:59.594 "enable_zerocopy_send_client": false, 00:14:59.594 "zerocopy_threshold": 0, 00:14:59.594 "tls_version": 0, 00:14:59.594 "enable_ktls": false 00:14:59.594 } 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "method": "sock_impl_set_options", 00:14:59.594 "params": { 00:14:59.594 "impl_name": "posix", 00:14:59.594 "recv_buf_size": 2097152, 00:14:59.594 "send_buf_size": 2097152, 00:14:59.594 "enable_recv_pipe": true, 00:14:59.594 "enable_quickack": false, 00:14:59.594 "enable_placement_id": 0, 00:14:59.594 "enable_zerocopy_send_server": true, 00:14:59.594 "enable_zerocopy_send_client": false, 00:14:59.594 "zerocopy_threshold": 0, 00:14:59.594 "tls_version": 0, 00:14:59.594 "enable_ktls": false 00:14:59.594 } 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "method": "sock_impl_set_options", 00:14:59.594 "params": { 00:14:59.594 "impl_name": "uring", 00:14:59.594 "recv_buf_size": 2097152, 00:14:59.594 "send_buf_size": 2097152, 00:14:59.594 "enable_recv_pipe": true, 00:14:59.594 "enable_quickack": false, 00:14:59.594 "enable_placement_id": 0, 00:14:59.594 "enable_zerocopy_send_server": false, 00:14:59.594 "enable_zerocopy_send_client": false, 00:14:59.594 "zerocopy_threshold": 0, 00:14:59.594 "tls_version": 0, 00:14:59.594 "enable_ktls": false 00:14:59.594 } 00:14:59.594 } 00:14:59.594 ] 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "subsystem": "vmd", 00:14:59.594 "config": [] 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "subsystem": "accel", 00:14:59.594 "config": [ 00:14:59.594 { 00:14:59.594 "method": "accel_set_options", 00:14:59.594 "params": { 00:14:59.594 "small_cache_size": 128, 00:14:59.594 "large_cache_size": 16, 00:14:59.594 "task_count": 2048, 00:14:59.594 "sequence_count": 2048, 00:14:59.594 "buf_count": 2048 00:14:59.594 } 00:14:59.594 } 00:14:59.594 ] 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "subsystem": "bdev", 00:14:59.594 "config": [ 00:14:59.594 { 00:14:59.594 "method": "bdev_set_options", 00:14:59.594 "params": { 00:14:59.594 "bdev_io_pool_size": 65535, 00:14:59.594 
"bdev_io_cache_size": 256, 00:14:59.594 "bdev_auto_examine": true, 00:14:59.594 "iobuf_small_cache_size": 128, 00:14:59.594 "iobuf_large_cache_size": 16 00:14:59.594 } 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "method": "bdev_raid_set_options", 00:14:59.594 "params": { 00:14:59.594 "process_window_size_kb": 1024 00:14:59.594 } 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "method": "bdev_iscsi_set_options", 00:14:59.594 "params": { 00:14:59.594 "timeout_sec": 30 00:14:59.594 } 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "method": "bdev_nvme_set_options", 00:14:59.594 "params": { 00:14:59.594 "action_on_timeout": "none", 00:14:59.594 "timeout_us": 0, 00:14:59.594 "timeout_admin_us": 0, 00:14:59.594 "keep_alive_timeout_ms": 10000, 00:14:59.594 "arbitration_burst": 0, 00:14:59.594 "low_priority_weight": 0, 00:14:59.594 "medium_priority_weight": 0, 00:14:59.594 "high_priority_weight": 0, 00:14:59.594 "nvme_adminq_poll_period_us": 10000, 00:14:59.594 "nvme_ioq_poll_period_us": 0, 00:14:59.594 "io_queue_requests": 0, 00:14:59.594 "delay_cmd_submit": true, 00:14:59.594 "transport_retry_count": 4, 00:14:59.594 "bdev_retry_count": 3, 00:14:59.594 "transport_ack_timeout": 0, 00:14:59.594 "ctrlr_loss_timeout_sec": 0, 00:14:59.594 "reconnect_delay_sec": 0, 00:14:59.594 "fast_io_fail_timeout_sec": 0, 00:14:59.594 "disable_auto_failback": false, 00:14:59.594 "generate_uuids": false, 00:14:59.594 "transport_tos": 0, 00:14:59.594 "nvme_error_stat": false, 00:14:59.594 "rdma_srq_size": 0, 00:14:59.594 "io_path_stat": false, 00:14:59.594 "allow_accel_sequence": false, 00:14:59.594 "rdma_max_cq_size": 0, 00:14:59.594 "rdma_cm_event_timeout_ms": 0, 00:14:59.594 "dhchap_digests": [ 00:14:59.594 "sha256", 00:14:59.594 "sha384", 00:14:59.594 "sha512" 00:14:59.594 ], 00:14:59.594 "dhchap_dhgroups": [ 00:14:59.594 "null", 00:14:59.594 "ffdhe2048", 00:14:59.594 "ffdhe3072", 00:14:59.594 "ffdhe4096", 00:14:59.594 "ffdhe6144", 00:14:59.594 "ffdhe8192" 00:14:59.594 ] 00:14:59.594 } 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "method": "bdev_nvme_set_hotplug", 00:14:59.594 "params": { 00:14:59.594 "period_us": 100000, 00:14:59.594 "enable": false 00:14:59.594 } 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "method": "bdev_malloc_create", 00:14:59.594 "params": { 00:14:59.594 "name": "malloc0", 00:14:59.594 "num_blocks": 8192, 00:14:59.594 "block_size": 4096, 00:14:59.594 "physical_block_size": 4096, 00:14:59.594 "uuid": "04fa106b-9649-4dc1-be02-4c6ee4015b89", 00:14:59.594 "optimal_io_boundary": 0 00:14:59.594 } 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "method": "bdev_wait_for_examine" 00:14:59.594 } 00:14:59.594 ] 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "subsystem": "nbd", 00:14:59.594 "config": [] 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "subsystem": "scheduler", 00:14:59.594 "config": [ 00:14:59.594 { 00:14:59.594 "method": "framework_set_scheduler", 00:14:59.594 "params": { 00:14:59.594 "name": "static" 00:14:59.594 } 00:14:59.594 } 00:14:59.594 ] 00:14:59.594 }, 00:14:59.594 { 00:14:59.594 "subsystem": "nvmf", 00:14:59.594 "config": [ 00:14:59.594 { 00:14:59.595 "method": "nvmf_set_config", 00:14:59.595 "params": { 00:14:59.595 "discovery_filter": "match_any", 00:14:59.595 "admin_cmd_passthru": { 00:14:59.595 "identify_ctrlr": false 00:14:59.595 } 00:14:59.595 } 00:14:59.595 }, 00:14:59.595 { 00:14:59.595 "method": "nvmf_set_max_subsystems", 00:14:59.595 "params": { 00:14:59.595 "max_subsystems": 1024 00:14:59.595 } 00:14:59.595 }, 00:14:59.595 { 00:14:59.595 "method": "nvmf_set_crdt", 00:14:59.595 
"params": { 00:14:59.595 "crdt1": 0, 00:14:59.595 "crdt2": 0, 00:14:59.595 "crdt3": 0 00:14:59.595 } 00:14:59.595 }, 00:14:59.595 { 00:14:59.595 "method": "nvmf_create_transport", 00:14:59.595 "params": { 00:14:59.595 "trtype": "TCP", 00:14:59.595 "max_queue_depth": 128, 00:14:59.595 "max_io_qpairs_per_ctrlr": 127, 00:14:59.595 "in_capsule_data_size": 4096, 00:14:59.595 "max_io_size": 131072, 00:14:59.595 "io_unit_size": 131072, 00:14:59.595 "max_aq_depth": 128, 00:14:59.595 "num_shared_buffers": 511, 00:14:59.595 "buf_cache_size": 4294967295, 00:14:59.595 "dif_insert_or_strip": false, 00:14:59.595 "zcopy": false, 00:14:59.595 "c2h_success": false, 00:14:59.595 "sock_priority": 0, 00:14:59.595 "abort_timeout_sec": 1, 00:14:59.595 "ack_timeout": 0, 00:14:59.595 "data_wr_pool_size": 0 00:14:59.595 } 00:14:59.595 }, 00:14:59.595 { 00:14:59.595 "method": "nvmf_create_subsystem", 00:14:59.595 "params": { 00:14:59.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:59.595 "allow_any_host": false, 00:14:59.595 "serial_number": "00000000000000000000", 00:14:59.595 "model_number": "SPDK bdev Controller", 00:14:59.595 "max_namespaces": 32, 00:14:59.595 "min_cntlid": 1, 00:14:59.595 "max_cntlid": 65519, 00:14:59.595 "ana_reporting": false 00:14:59.595 } 00:14:59.595 }, 00:14:59.595 { 00:14:59.595 "method": "nvmf_subsystem_add_host", 00:14:59.595 "params": { 00:14:59.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:59.595 "host": "nqn.2016-06.io.spdk:host1", 00:14:59.595 "psk": "key0" 00:14:59.595 } 00:14:59.595 }, 00:14:59.595 { 00:14:59.595 "method": "nvmf_subsystem_add_ns", 00:14:59.595 "params": { 00:14:59.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:59.595 "namespace": { 00:14:59.595 "nsid": 1, 00:14:59.595 "bdev_name": "malloc0", 00:14:59.595 "nguid": "04FA106B96494DC1BE024C6EE4015B89", 00:14:59.595 "uuid": "04fa106b-9649-4dc1-be02-4c6ee4015b89", 00:14:59.595 "no_auto_visible": false 00:14:59.595 } 00:14:59.595 } 00:14:59.595 }, 00:14:59.595 { 00:14:59.595 "method": "nvmf_subsystem_add_listener", 00:14:59.595 "params": { 00:14:59.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:59.595 "listen_address": { 00:14:59.595 "trtype": "TCP", 00:14:59.595 "adrfam": "IPv4", 00:14:59.595 "traddr": "10.0.0.2", 00:14:59.595 "trsvcid": "4420" 00:14:59.595 }, 00:14:59.595 "secure_channel": true 00:14:59.595 } 00:14:59.595 } 00:14:59.595 ] 00:14:59.595 } 00:14:59.595 ] 00:14:59.595 }' 00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74105 00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74105 00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 74105 ']' 00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:59.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:59.595 08:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.595 [2024-06-10 08:10:21.438994] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:14:59.595 [2024-06-10 08:10:21.439103] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.854 [2024-06-10 08:10:21.568970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.854 [2024-06-10 08:10:21.677219] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.854 [2024-06-10 08:10:21.677283] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.854 [2024-06-10 08:10:21.677296] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.854 [2024-06-10 08:10:21.677304] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.854 [2024-06-10 08:10:21.677310] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.854 [2024-06-10 08:10:21.677420] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.113 [2024-06-10 08:10:21.858181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:00.113 [2024-06-10 08:10:21.942602] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.114 [2024-06-10 08:10:21.974553] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:00.114 [2024-06-10 08:10:21.974753] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=74137 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 74137 /var/tmp/bdevperf.sock 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 74137 ']' 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
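The restart sequence above replays configuration captured from the previous processes: the target JSON saved with rpc_cmd save_config is fed back in through -c /dev/fd/62, and the bdevperf JSON saved over /var/tmp/bdevperf.sock through -c /dev/fd/63. A minimal sketch of the same round trip using ordinary files; the /tmp paths are illustrative and the netns wrapper used by the job is omitted:

    # capture configuration from the still-running processes...
    scripts/rpc.py save_config > /tmp/tgt_config.json                              # target side (TLS listener, PSK host entry, malloc0)
    scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > /tmp/bperf_config.json  # initiator side (keyring entry, nvme0 controller)
    # ...then, after stopping them, restart both from the saved JSON instead of reissuing individual RPCs
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tgt_config.json
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /tmp/bperf_config.json
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests     # -z keeps bdevperf idle until this call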
00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:00.684 08:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:15:00.684 "subsystems": [ 00:15:00.684 { 00:15:00.684 "subsystem": "keyring", 00:15:00.684 "config": [ 00:15:00.684 { 00:15:00.684 "method": "keyring_file_add_key", 00:15:00.684 "params": { 00:15:00.684 "name": "key0", 00:15:00.684 "path": "/tmp/tmp.URGuEZqjnl" 00:15:00.684 } 00:15:00.684 } 00:15:00.684 ] 00:15:00.684 }, 00:15:00.684 { 00:15:00.684 "subsystem": "iobuf", 00:15:00.684 "config": [ 00:15:00.684 { 00:15:00.684 "method": "iobuf_set_options", 00:15:00.684 "params": { 00:15:00.684 "small_pool_count": 8192, 00:15:00.684 "large_pool_count": 1024, 00:15:00.684 "small_bufsize": 8192, 00:15:00.684 "large_bufsize": 135168 00:15:00.684 } 00:15:00.684 } 00:15:00.684 ] 00:15:00.684 }, 00:15:00.684 { 00:15:00.684 "subsystem": "sock", 00:15:00.684 "config": [ 00:15:00.684 { 00:15:00.684 "method": "sock_set_default_impl", 00:15:00.684 "params": { 00:15:00.685 "impl_name": "uring" 00:15:00.685 } 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "method": "sock_impl_set_options", 00:15:00.685 "params": { 00:15:00.685 "impl_name": "ssl", 00:15:00.685 "recv_buf_size": 4096, 00:15:00.685 "send_buf_size": 4096, 00:15:00.685 "enable_recv_pipe": true, 00:15:00.685 "enable_quickack": false, 00:15:00.685 "enable_placement_id": 0, 00:15:00.685 "enable_zerocopy_send_server": true, 00:15:00.685 "enable_zerocopy_send_client": false, 00:15:00.685 "zerocopy_threshold": 0, 00:15:00.685 "tls_version": 0, 00:15:00.685 "enable_ktls": false 00:15:00.685 } 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "method": "sock_impl_set_options", 00:15:00.685 "params": { 00:15:00.685 "impl_name": "posix", 00:15:00.685 "recv_buf_size": 2097152, 00:15:00.685 "send_buf_size": 2097152, 00:15:00.685 "enable_recv_pipe": true, 00:15:00.685 "enable_quickack": false, 00:15:00.685 "enable_placement_id": 0, 00:15:00.685 "enable_zerocopy_send_server": true, 00:15:00.685 "enable_zerocopy_send_client": false, 00:15:00.685 "zerocopy_threshold": 0, 00:15:00.685 "tls_version": 0, 00:15:00.685 "enable_ktls": false 00:15:00.685 } 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "method": "sock_impl_set_options", 00:15:00.685 "params": { 00:15:00.685 "impl_name": "uring", 00:15:00.685 "recv_buf_size": 2097152, 00:15:00.685 "send_buf_size": 2097152, 00:15:00.685 "enable_recv_pipe": true, 00:15:00.685 "enable_quickack": false, 00:15:00.685 "enable_placement_id": 0, 00:15:00.685 "enable_zerocopy_send_server": false, 00:15:00.685 "enable_zerocopy_send_client": false, 00:15:00.685 "zerocopy_threshold": 0, 00:15:00.685 "tls_version": 0, 00:15:00.685 "enable_ktls": false 00:15:00.685 } 00:15:00.685 } 00:15:00.685 ] 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "subsystem": "vmd", 00:15:00.685 "config": [] 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "subsystem": "accel", 00:15:00.685 "config": [ 00:15:00.685 { 00:15:00.685 "method": "accel_set_options", 00:15:00.685 "params": { 00:15:00.685 "small_cache_size": 128, 00:15:00.685 "large_cache_size": 16, 00:15:00.685 "task_count": 2048, 00:15:00.685 "sequence_count": 2048, 00:15:00.685 "buf_count": 2048 00:15:00.685 } 00:15:00.685 } 00:15:00.685 ] 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "subsystem": "bdev", 00:15:00.685 "config": [ 00:15:00.685 { 00:15:00.685 "method": "bdev_set_options", 00:15:00.685 "params": { 00:15:00.685 "bdev_io_pool_size": 65535, 00:15:00.685 "bdev_io_cache_size": 256, 00:15:00.685 "bdev_auto_examine": true, 00:15:00.685 
"iobuf_small_cache_size": 128, 00:15:00.685 "iobuf_large_cache_size": 16 00:15:00.685 } 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "method": "bdev_raid_set_options", 00:15:00.685 "params": { 00:15:00.685 "process_window_size_kb": 1024 00:15:00.685 } 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "method": "bdev_iscsi_set_options", 00:15:00.685 "params": { 00:15:00.685 "timeout_sec": 30 00:15:00.685 } 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "method": "bdev_nvme_set_options", 00:15:00.685 "params": { 00:15:00.685 "action_on_timeout": "none", 00:15:00.685 "timeout_us": 0, 00:15:00.685 "timeout_admin_us": 0, 00:15:00.685 "keep_alive_timeout_ms": 10000, 00:15:00.685 "arbitration_burst": 0, 00:15:00.685 "low_priority_weight": 0, 00:15:00.685 "medium_priority_weight": 0, 00:15:00.685 "high_priority_weight": 0, 00:15:00.685 "nvme_adminq_poll_period_us": 10000, 00:15:00.685 "nvme_ioq_poll_period_us": 0, 00:15:00.685 "io_queue_requests": 512, 00:15:00.685 "delay_cmd_submit": true, 00:15:00.685 "transport_retry_count": 4, 00:15:00.685 "bdev_retry_count": 3, 00:15:00.685 "transport_ack_timeout": 0, 00:15:00.685 "ctrlr_loss_timeout_sec": 0, 00:15:00.685 "reconnect_delay_sec": 0, 00:15:00.685 "fast_io_fail_timeout_sec": 0, 00:15:00.685 "disable_auto_failback": false, 00:15:00.685 "generate_uuids": false, 00:15:00.685 "transport_tos": 0, 00:15:00.685 "nvme_error_stat": false, 00:15:00.685 "rdma_srq_size": 0, 00:15:00.685 "io_path_stat": false, 00:15:00.685 "allow_accel_sequence": false, 00:15:00.685 "rdma_max_cq_size": 0, 00:15:00.685 "rdma_cm_event_timeout_ms": 0, 00:15:00.685 "dhchap_digests": [ 00:15:00.685 "sha256", 00:15:00.685 "sha384", 00:15:00.685 "sha512" 00:15:00.685 ], 00:15:00.685 "dhchap_dhgroups": [ 00:15:00.685 "null", 00:15:00.685 "ffdhe2048", 00:15:00.685 "ffdhe3072", 00:15:00.685 "ffdhe4096", 00:15:00.685 "ffdhe6144", 00:15:00.685 "ffdhe8192" 00:15:00.685 ] 00:15:00.685 } 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "method": "bdev_nvme_attach_controller", 00:15:00.685 "params": { 00:15:00.685 "name": "nvme0", 00:15:00.685 "trtype": "TCP", 00:15:00.685 "adrfam": "IPv4", 00:15:00.685 "traddr": "10.0.0.2", 00:15:00.685 "trsvcid": "4420", 00:15:00.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.685 "prchk_reftag": false, 00:15:00.685 "prchk_guard": false, 00:15:00.685 "ctrlr_loss_timeout_sec": 0, 00:15:00.685 "reconnect_delay_sec": 0, 00:15:00.685 "fast_io_fail_timeout_sec": 0, 00:15:00.685 "psk": "key0", 00:15:00.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.685 "hdgst": false, 00:15:00.685 "ddgst": false 00:15:00.685 } 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "method": "bdev_nvme_set_hotplug", 00:15:00.685 "params": { 00:15:00.685 "period_us": 100000, 00:15:00.685 "enable": false 00:15:00.685 } 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "method": "bdev_enable_histogram", 00:15:00.685 "params": { 00:15:00.685 "name": "nvme0n1", 00:15:00.685 "enable": true 00:15:00.685 } 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "method": "bdev_wait_for_examine" 00:15:00.685 } 00:15:00.685 ] 00:15:00.685 }, 00:15:00.685 { 00:15:00.685 "subsystem": "nbd", 00:15:00.685 "config": [] 00:15:00.685 } 00:15:00.685 ] 00:15:00.685 }' 00:15:00.685 08:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.685 [2024-06-10 08:10:22.502980] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:15:00.685 [2024-06-10 08:10:22.503089] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74137 ] 00:15:00.945 [2024-06-10 08:10:22.643104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.945 [2024-06-10 08:10:22.773761] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.204 [2024-06-10 08:10:22.912540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:01.204 [2024-06-10 08:10:22.966887] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.772 08:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:01.772 08:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:15:01.772 08:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:01.772 08:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:02.031 08:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.031 08:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:02.031 Running I/O for 1 seconds... 00:15:03.408 00:15:03.408 Latency(us) 00:15:03.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.408 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.408 Verification LBA range: start 0x0 length 0x2000 00:15:03.408 nvme0n1 : 1.01 4371.63 17.08 0.00 0.00 29029.64 4647.10 23831.27 00:15:03.408 =================================================================================================================== 00:15:03.408 Total : 4371.63 17.08 0.00 0.00 29029.64 4647.10 23831.27 00:15:03.408 0 00:15:03.408 08:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:03.408 08:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:15:03.408 08:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:03.409 nvmf_trace.0 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74137 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 74137 ']' 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 
74137 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 74137 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:03.409 killing process with pid 74137 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 74137' 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 74137 00:15:03.409 Received shutdown signal, test time was about 1.000000 seconds 00:15:03.409 00:15:03.409 Latency(us) 00:15:03.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.409 =================================================================================================================== 00:15:03.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:03.409 08:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 74137 00:15:03.409 08:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:03.409 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:03.409 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:03.409 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:03.409 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:03.409 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:03.409 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:03.409 rmmod nvme_tcp 00:15:03.409 rmmod nvme_fabrics 00:15:03.668 rmmod nvme_keyring 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74105 ']' 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74105 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 74105 ']' 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 74105 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 74105 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:03.668 killing process with pid 74105 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 74105' 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 74105 00:15:03.668 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 74105 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xRygmvED4L /tmp/tmp.ObAJbBwMMt /tmp/tmp.URGuEZqjnl 00:15:03.927 00:15:03.927 real 1m26.937s 00:15:03.927 user 2m14.486s 00:15:03.927 sys 0m30.275s 00:15:03.927 ************************************ 00:15:03.927 END TEST nvmf_tls 00:15:03.927 ************************************ 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:03.927 08:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.927 08:10:25 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:03.927 08:10:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:03.927 08:10:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:03.927 08:10:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.927 ************************************ 00:15:03.927 START TEST nvmf_fips 00:15:03.927 ************************************ 00:15:03.927 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:04.187 * Looking for test storage... 
00:15:04.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.187 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:15:04.188 08:10:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:15:04.188 Error setting digest 00:15:04.188 00F2B7C14E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:04.188 00F2B7C14E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:04.188 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:04.189 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:04.448 Cannot find device "nvmf_tgt_br" 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:04.448 Cannot find device "nvmf_tgt_br2" 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:04.448 Cannot find device "nvmf_tgt_br" 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:04.448 Cannot find device "nvmf_tgt_br2" 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:04.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:04.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:04.448 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:04.707 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:04.707 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:04.707 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:04.707 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:04.707 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:04.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:15:04.708 00:15:04.708 --- 10.0.0.2 ping statistics --- 00:15:04.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.708 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:04.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:04.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:04.708 00:15:04.708 --- 10.0.0.3 ping statistics --- 00:15:04.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.708 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:04.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:04.708 00:15:04.708 --- 10.0.0.1 ping statistics --- 00:15:04.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.708 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74406 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74406 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 74406 ']' 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:04.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:04.708 08:10:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:04.708 [2024-06-10 08:10:26.492054] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:15:04.708 [2024-06-10 08:10:26.492156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.967 [2024-06-10 08:10:26.635365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.967 [2024-06-10 08:10:26.758159] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.967 [2024-06-10 08:10:26.758226] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.967 [2024-06-10 08:10:26.758242] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.967 [2024-06-10 08:10:26.758253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.967 [2024-06-10 08:10:26.758263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.967 [2024-06-10 08:10:26.758294] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.967 [2024-06-10 08:10:26.820997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:05.905 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.905 [2024-06-10 08:10:27.740930] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.905 [2024-06-10 08:10:27.756902] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:05.905 [2024-06-10 08:10:27.757077] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.165 [2024-06-10 08:10:27.789829] tcp.c:3707:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:06.165 malloc0 00:15:06.165 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:06.165 08:10:27 
nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74442 00:15:06.165 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:06.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:06.165 08:10:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74442 /var/tmp/bdevperf.sock 00:15:06.165 08:10:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 74442 ']' 00:15:06.165 08:10:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.165 08:10:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:06.165 08:10:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:06.165 08:10:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:06.165 08:10:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:06.165 [2024-06-10 08:10:27.904022] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:15:06.165 [2024-06-10 08:10:27.904310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74442 ] 00:15:06.425 [2024-06-10 08:10:28.044307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.425 [2024-06-10 08:10:28.164344] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.425 [2024-06-10 08:10:28.225757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:06.994 08:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:06.994 08:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:15:06.994 08:10:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:07.253 [2024-06-10 08:10:29.009857] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:07.253 [2024-06-10 08:10:29.010022] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:07.253 TLSTESTn1 00:15:07.253 08:10:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:07.513 Running I/O for 10 seconds... 
00:15:17.506 00:15:17.506 Latency(us) 00:15:17.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.506 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:17.506 Verification LBA range: start 0x0 length 0x2000 00:15:17.506 TLSTESTn1 : 10.01 3949.78 15.43 0.00 0.00 32352.96 5272.67 34555.35 00:15:17.506 =================================================================================================================== 00:15:17.506 Total : 3949.78 15.43 0.00 0.00 32352.96 5272.67 34555.35 00:15:17.506 0 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:17.506 nvmf_trace.0 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74442 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 74442 ']' 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 74442 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 74442 00:15:17.506 killing process with pid 74442 00:15:17.506 Received shutdown signal, test time was about 10.000000 seconds 00:15:17.506 00:15:17.506 Latency(us) 00:15:17.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.506 =================================================================================================================== 00:15:17.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 74442' 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 74442 00:15:17.506 [2024-06-10 08:10:39.354005] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:17.506 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 74442 00:15:17.765 08:10:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:17.765 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
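The initiator side of this FIPS/TLS case reduces to three steps, condensed here from the commands that appear in this run (same paths, NQNs and PSK file as above): start bdevperf idle with -z, attach a TLS-protected NVMe-oF controller over its RPC socket, then kick off the workload.

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    PERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $BDEVPERF -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &   # -z: wait for RPC configuration
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt                    # PSK file written by fips.sh earlier
    $PERF_PY -s /var/tmp/bdevperf.sock perform_tests                                 # produces the TLSTESTn1 table above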
00:15:17.765 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:17.766 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.766 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:17.766 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.766 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:18.025 rmmod nvme_tcp 00:15:18.025 rmmod nvme_fabrics 00:15:18.025 rmmod nvme_keyring 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74406 ']' 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74406 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 74406 ']' 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 74406 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 74406 00:15:18.025 killing process with pid 74406 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 74406' 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 74406 00:15:18.025 [2024-06-10 08:10:39.700754] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:18.025 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 74406 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:18.284 ************************************ 00:15:18.284 END TEST nvmf_fips 00:15:18.284 ************************************ 00:15:18.284 00:15:18.284 real 0m14.222s 00:15:18.284 user 0m18.213s 00:15:18.284 sys 0m6.505s 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:18.284 08:10:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:18.284 08:10:40 
nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:15:18.284 08:10:40 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:15:18.284 08:10:40 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:15:18.284 08:10:40 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:18.284 08:10:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.284 08:10:40 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:15:18.284 08:10:40 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:18.284 08:10:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.284 08:10:40 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 1 -eq 0 ]] 00:15:18.284 08:10:40 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:18.284 08:10:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:18.284 08:10:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:18.284 08:10:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.284 ************************************ 00:15:18.284 START TEST nvmf_identify 00:15:18.284 ************************************ 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:18.284 * Looking for test storage... 00:15:18.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.284 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.544 
08:10:40 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:18.544 Cannot find device "nvmf_tgt_br" 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.544 Cannot find device "nvmf_tgt_br2" 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:18.544 
08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:18.544 Cannot find device "nvmf_tgt_br" 00:15:18.544 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:18.545 Cannot find device "nvmf_tgt_br2" 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:18.545 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:18.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:18.804 00:15:18.804 --- 10.0.0.2 ping statistics --- 00:15:18.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.804 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:18.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:18.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:18.804 00:15:18.804 --- 10.0.0.3 ping statistics --- 00:15:18.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.804 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:18.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:18.804 00:15:18.804 --- 10.0.0.1 ping statistics --- 00:15:18.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.804 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74785 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74785 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 74785 ']' 00:15:18.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:18.804 08:10:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:18.804 [2024-06-10 08:10:40.575440] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:15:18.804 [2024-06-10 08:10:40.575528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.063 [2024-06-10 08:10:40.713886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.063 [2024-06-10 08:10:40.830113] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.063 [2024-06-10 08:10:40.830461] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.063 [2024-06-10 08:10:40.830637] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.063 [2024-06-10 08:10:40.830776] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.063 [2024-06-10 08:10:40.830847] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.063 [2024-06-10 08:10:40.831099] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.063 [2024-06-10 08:10:40.831238] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.063 [2024-06-10 08:10:40.831321] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.063 [2024-06-10 08:10:40.831322] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.063 [2024-06-10 08:10:40.890126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:20.001 [2024-06-10 08:10:41.580906] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.001 Malloc0 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:20.001 [2024-06-10 08:10:41.690434] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.001 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:20.001 [ 00:15:20.001 { 00:15:20.001 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:20.001 "subtype": "Discovery", 00:15:20.001 "listen_addresses": [ 00:15:20.001 { 00:15:20.001 "trtype": "TCP", 00:15:20.001 "adrfam": "IPv4", 00:15:20.001 "traddr": "10.0.0.2", 00:15:20.001 "trsvcid": "4420" 00:15:20.001 } 00:15:20.001 ], 00:15:20.001 "allow_any_host": true, 00:15:20.001 "hosts": [] 00:15:20.001 }, 00:15:20.001 { 00:15:20.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.001 "subtype": "NVMe", 00:15:20.001 "listen_addresses": [ 00:15:20.001 { 00:15:20.001 "trtype": "TCP", 00:15:20.001 "adrfam": "IPv4", 00:15:20.001 "traddr": "10.0.0.2", 00:15:20.001 "trsvcid": "4420" 00:15:20.001 } 00:15:20.001 ], 00:15:20.001 "allow_any_host": true, 00:15:20.001 "hosts": [], 00:15:20.001 "serial_number": "SPDK00000000000001", 00:15:20.001 "model_number": "SPDK bdev Controller", 00:15:20.001 "max_namespaces": 32, 00:15:20.001 "min_cntlid": 1, 00:15:20.001 "max_cntlid": 65519, 00:15:20.001 "namespaces": [ 00:15:20.001 { 00:15:20.001 "nsid": 1, 00:15:20.001 "bdev_name": "Malloc0", 00:15:20.001 "name": "Malloc0", 00:15:20.001 "nguid": 
"ABCDEF0123456789ABCDEF0123456789", 00:15:20.001 "eui64": "ABCDEF0123456789", 00:15:20.001 "uuid": "43843df3-6407-4354-b088-7b95ca8915ce" 00:15:20.002 } 00:15:20.002 ] 00:15:20.002 } 00:15:20.002 ] 00:15:20.002 08:10:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.002 08:10:41 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:20.002 [2024-06-10 08:10:41.756577] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:15:20.002 [2024-06-10 08:10:41.756806] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74826 ] 00:15:20.268 [2024-06-10 08:10:41.892441] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:20.268 [2024-06-10 08:10:41.892544] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:20.268 [2024-06-10 08:10:41.892552] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:20.268 [2024-06-10 08:10:41.892563] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:20.268 [2024-06-10 08:10:41.892571] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:20.268 [2024-06-10 08:10:41.892738] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:20.269 [2024-06-10 08:10:41.892858] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2373a60 0 00:15:20.269 [2024-06-10 08:10:41.897855] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:20.269 [2024-06-10 08:10:41.897896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:20.269 [2024-06-10 08:10:41.897906] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:20.269 [2024-06-10 08:10:41.897909] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:20.269 [2024-06-10 08:10:41.897954] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.897961] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.897965] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2373a60) 00:15:20.269 [2024-06-10 08:10:41.897997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:20.269 [2024-06-10 08:10:41.898034] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b67f0, cid 0, qid 0 00:15:20.269 [2024-06-10 08:10:41.905834] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.269 [2024-06-10 08:10:41.905855] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.269 [2024-06-10 08:10:41.905876] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.905881] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b67f0) on tqpair=0x2373a60 00:15:20.269 [2024-06-10 08:10:41.905892] nvme_fabric.c: 
622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:20.269 [2024-06-10 08:10:41.905900] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:20.269 [2024-06-10 08:10:41.905905] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:20.269 [2024-06-10 08:10:41.905923] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.905929] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.905932] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2373a60) 00:15:20.269 [2024-06-10 08:10:41.905941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.269 [2024-06-10 08:10:41.905967] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b67f0, cid 0, qid 0 00:15:20.269 [2024-06-10 08:10:41.906028] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.269 [2024-06-10 08:10:41.906034] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.269 [2024-06-10 08:10:41.906038] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.906042] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b67f0) on tqpair=0x2373a60 00:15:20.269 [2024-06-10 08:10:41.906048] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:20.269 [2024-06-10 08:10:41.906055] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:20.269 [2024-06-10 08:10:41.906077] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.906097] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.906101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2373a60) 00:15:20.269 [2024-06-10 08:10:41.906108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.269 [2024-06-10 08:10:41.906126] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b67f0, cid 0, qid 0 00:15:20.269 [2024-06-10 08:10:41.906166] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.269 [2024-06-10 08:10:41.906173] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.269 [2024-06-10 08:10:41.906176] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.906180] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b67f0) on tqpair=0x2373a60 00:15:20.269 [2024-06-10 08:10:41.906187] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:20.269 [2024-06-10 08:10:41.906195] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:20.269 [2024-06-10 08:10:41.906202] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.906206] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.906210] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2373a60) 00:15:20.269 [2024-06-10 08:10:41.906232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.269 [2024-06-10 08:10:41.906248] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b67f0, cid 0, qid 0 00:15:20.269 [2024-06-10 08:10:41.906290] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.269 [2024-06-10 08:10:41.906304] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.269 [2024-06-10 08:10:41.906308] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.906312] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b67f0) on tqpair=0x2373a60 00:15:20.269 [2024-06-10 08:10:41.906318] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:20.269 [2024-06-10 08:10:41.906328] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.906332] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.269 [2024-06-10 08:10:41.906336] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2373a60) 00:15:20.269 [2024-06-10 08:10:41.906343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.269 [2024-06-10 08:10:41.906358] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b67f0, cid 0, qid 0 00:15:20.269 [2024-06-10 08:10:41.906413] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.269 [2024-06-10 08:10:41.906419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.269 [2024-06-10 08:10:41.906423] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906426] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b67f0) on tqpair=0x2373a60 00:15:20.270 [2024-06-10 08:10:41.906432] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:20.270 [2024-06-10 08:10:41.906437] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:20.270 [2024-06-10 08:10:41.906444] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:20.270 [2024-06-10 08:10:41.906549] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:20.270 [2024-06-10 08:10:41.906554] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:20.270 [2024-06-10 08:10:41.906563] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906567] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906571] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x2373a60) 00:15:20.270 [2024-06-10 08:10:41.906577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.270 [2024-06-10 08:10:41.906593] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b67f0, cid 0, qid 0 00:15:20.270 [2024-06-10 08:10:41.906636] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.270 [2024-06-10 08:10:41.906642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.270 [2024-06-10 08:10:41.906645] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906649] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b67f0) on tqpair=0x2373a60 00:15:20.270 [2024-06-10 08:10:41.906655] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:20.270 [2024-06-10 08:10:41.906664] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906669] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906672] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2373a60) 00:15:20.270 [2024-06-10 08:10:41.906679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.270 [2024-06-10 08:10:41.906694] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b67f0, cid 0, qid 0 00:15:20.270 [2024-06-10 08:10:41.906737] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.270 [2024-06-10 08:10:41.906744] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.270 [2024-06-10 08:10:41.906747] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906751] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b67f0) on tqpair=0x2373a60 00:15:20.270 [2024-06-10 08:10:41.906758] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:20.270 [2024-06-10 08:10:41.906762] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:20.270 [2024-06-10 08:10:41.906770] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:20.270 [2024-06-10 08:10:41.906784] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:20.270 [2024-06-10 08:10:41.906795] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906799] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2373a60) 00:15:20.270 [2024-06-10 08:10:41.906806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.270 [2024-06-10 08:10:41.906824] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b67f0, cid 0, qid 0 00:15:20.270 [2024-06-10 08:10:41.906908] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:15:20.270 [2024-06-10 08:10:41.906917] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.270 [2024-06-10 08:10:41.906920] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906924] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2373a60): datao=0, datal=4096, cccid=0 00:15:20.270 [2024-06-10 08:10:41.906929] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b67f0) on tqpair(0x2373a60): expected_datao=0, payload_size=4096 00:15:20.270 [2024-06-10 08:10:41.906934] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906942] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906946] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906955] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.270 [2024-06-10 08:10:41.906961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.270 [2024-06-10 08:10:41.906965] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.906969] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b67f0) on tqpair=0x2373a60 00:15:20.270 [2024-06-10 08:10:41.906978] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:20.270 [2024-06-10 08:10:41.906983] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:20.270 [2024-06-10 08:10:41.906988] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:20.270 [2024-06-10 08:10:41.906993] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:20.270 [2024-06-10 08:10:41.906998] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:20.270 [2024-06-10 08:10:41.907003] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:20.270 [2024-06-10 08:10:41.907016] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:20.270 [2024-06-10 08:10:41.907026] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.907046] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.907049] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2373a60) 00:15:20.270 [2024-06-10 08:10:41.907057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:20.270 [2024-06-10 08:10:41.907077] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b67f0, cid 0, qid 0 00:15:20.270 [2024-06-10 08:10:41.907127] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.270 [2024-06-10 08:10:41.907133] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.270 [2024-06-10 08:10:41.907137] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.270 [2024-06-10 08:10:41.907141] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b67f0) on tqpair=0x2373a60 00:15:20.270 [2024-06-10 08:10:41.907154] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907158] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907162] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2373a60) 00:15:20.271 [2024-06-10 08:10:41.907169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.271 [2024-06-10 08:10:41.907175] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907178] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907182] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2373a60) 00:15:20.271 [2024-06-10 08:10:41.907187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.271 [2024-06-10 08:10:41.907193] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907200] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2373a60) 00:15:20.271 [2024-06-10 08:10:41.907206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.271 [2024-06-10 08:10:41.907211] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907215] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907218] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.271 [2024-06-10 08:10:41.907224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.271 [2024-06-10 08:10:41.907228] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:20.271 [2024-06-10 08:10:41.907236] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:20.271 [2024-06-10 08:10:41.907243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2373a60) 00:15:20.271 [2024-06-10 08:10:41.907253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.271 [2024-06-10 08:10:41.907272] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b67f0, cid 0, qid 0 00:15:20.271 [2024-06-10 08:10:41.907278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6950, cid 1, qid 0 00:15:20.271 [2024-06-10 08:10:41.907283] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6ab0, cid 2, qid 0 00:15:20.271 [2024-06-10 08:10:41.907287] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 
00:15:20.271 [2024-06-10 08:10:41.907291] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6d70, cid 4, qid 0 00:15:20.271 [2024-06-10 08:10:41.907370] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.271 [2024-06-10 08:10:41.907376] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.271 [2024-06-10 08:10:41.907380] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907383] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6d70) on tqpair=0x2373a60 00:15:20.271 [2024-06-10 08:10:41.907389] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:20.271 [2024-06-10 08:10:41.907398] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:20.271 [2024-06-10 08:10:41.907410] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907414] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2373a60) 00:15:20.271 [2024-06-10 08:10:41.907421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.271 [2024-06-10 08:10:41.907438] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6d70, cid 4, qid 0 00:15:20.271 [2024-06-10 08:10:41.907502] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.271 [2024-06-10 08:10:41.907517] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.271 [2024-06-10 08:10:41.907521] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907532] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2373a60): datao=0, datal=4096, cccid=4 00:15:20.271 [2024-06-10 08:10:41.907537] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b6d70) on tqpair(0x2373a60): expected_datao=0, payload_size=4096 00:15:20.271 [2024-06-10 08:10:41.907541] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907548] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907552] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.271 [2024-06-10 08:10:41.907566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.271 [2024-06-10 08:10:41.907569] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907573] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6d70) on tqpair=0x2373a60 00:15:20.271 [2024-06-10 08:10:41.907586] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:20.271 [2024-06-10 08:10:41.907614] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907620] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2373a60) 00:15:20.271 [2024-06-10 08:10:41.907626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:20.271 [2024-06-10 08:10:41.907633] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907637] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907641] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2373a60) 00:15:20.271 [2024-06-10 08:10:41.907646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.271 [2024-06-10 08:10:41.907669] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6d70, cid 4, qid 0 00:15:20.271 [2024-06-10 08:10:41.907676] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6ed0, cid 5, qid 0 00:15:20.271 [2024-06-10 08:10:41.907757] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.271 [2024-06-10 08:10:41.907773] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.271 [2024-06-10 08:10:41.907778] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907792] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2373a60): datao=0, datal=1024, cccid=4 00:15:20.271 [2024-06-10 08:10:41.907814] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b6d70) on tqpair(0x2373a60): expected_datao=0, payload_size=1024 00:15:20.271 [2024-06-10 08:10:41.907819] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907826] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907830] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.271 [2024-06-10 08:10:41.907836] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.271 [2024-06-10 08:10:41.907841] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.272 [2024-06-10 08:10:41.907845] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.272 [2024-06-10 08:10:41.907849] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6ed0) on tqpair=0x2373a60 00:15:20.272 [2024-06-10 08:10:41.907869] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.272 [2024-06-10 08:10:41.907876] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.272 [2024-06-10 08:10:41.907880] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.272 [2024-06-10 08:10:41.907883] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6d70) on tqpair=0x2373a60 00:15:20.272 [2024-06-10 08:10:41.907897] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.272 [2024-06-10 08:10:41.907901] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2373a60) 00:15:20.272 [2024-06-10 08:10:41.907908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.272 [2024-06-10 08:10:41.907932] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6d70, cid 4, qid 0 00:15:20.272 [2024-06-10 08:10:41.907994] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.272 [2024-06-10 08:10:41.908001] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.272 [2024-06-10 08:10:41.908004] 
nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.272 [2024-06-10 08:10:41.908008] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2373a60): datao=0, datal=3072, cccid=4 00:15:20.272 [2024-06-10 08:10:41.908013] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b6d70) on tqpair(0x2373a60): expected_datao=0, payload_size=3072 00:15:20.272 [2024-06-10 08:10:41.908017] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.272 [2024-06-10 08:10:41.908024] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.272 [2024-06-10 08:10:41.908028] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.272 [2024-06-10 08:10:41.908036] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.272 [2024-06-10 08:10:41.908042] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.272 [2024-06-10 08:10:41.908045] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.272 [2024-06-10 08:10:41.908049] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6d70) on tqpair=0x2373a60 00:15:20.272 [2024-06-10 08:10:41.908059] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.272 [2024-06-10 08:10:41.908064] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2373a60) 00:15:20.272 [2024-06-10 08:10:41.908071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.272 [2024-06-10 08:10:41.908092] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6d70, cid 4, qid 0 00:15:20.272 [2024-06-10 08:10:41.908169] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.272 [2024-06-10 08:10:41.908175] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.272 [2024-06-10 08:10:41.908179] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.272 [2024-06-10 08:10:41.908182] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2373a60): datao=0, datal=8, cccid=4 00:15:20.272 [2024-06-10 08:10:41.908186] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b6d70) on tqpair(0x2373a60): expected_datao=0, payload_size=8 00:15:20.272 [2024-06-10 08:10:41.908191] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.272 ===================================================== 00:15:20.272 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:20.272 ===================================================== 00:15:20.272 Controller Capabilities/Features 00:15:20.272 ================================ 00:15:20.272 Vendor ID: 0000 00:15:20.272 Subsystem Vendor ID: 0000 00:15:20.272 Serial Number: .................... 00:15:20.272 Model Number: ........................................ 
00:15:20.272 Firmware Version: 24.09 00:15:20.272 Recommended Arb Burst: 0 00:15:20.272 IEEE OUI Identifier: 00 00 00 00:15:20.272 Multi-path I/O 00:15:20.272 May have multiple subsystem ports: No 00:15:20.272 May have multiple controllers: No 00:15:20.272 Associated with SR-IOV VF: No 00:15:20.272 Max Data Transfer Size: 131072 00:15:20.272 Max Number of Namespaces: 0 00:15:20.272 Max Number of I/O Queues: 1024 00:15:20.272 NVMe Specification Version (VS): 1.3 00:15:20.272 NVMe Specification Version (Identify): 1.3 00:15:20.272 Maximum Queue Entries: 128 00:15:20.272 Contiguous Queues Required: Yes 00:15:20.272 Arbitration Mechanisms Supported 00:15:20.272 Weighted Round Robin: Not Supported 00:15:20.272 Vendor Specific: Not Supported 00:15:20.272 Reset Timeout: 15000 ms 00:15:20.272 Doorbell Stride: 4 bytes 00:15:20.272 NVM Subsystem Reset: Not Supported 00:15:20.272 Command Sets Supported 00:15:20.272 NVM Command Set: Supported 00:15:20.272 Boot Partition: Not Supported 00:15:20.272 Memory Page Size Minimum: 4096 bytes 00:15:20.272 Memory Page Size Maximum: 4096 bytes 00:15:20.272 Persistent Memory Region: Not Supported 00:15:20.272 Optional Asynchronous Events Supported 00:15:20.272 Namespace Attribute Notices: Not Supported 00:15:20.272 Firmware Activation Notices: Not Supported 00:15:20.272 ANA Change Notices: Not Supported 00:15:20.272 PLE Aggregate Log Change Notices: Not Supported 00:15:20.272 LBA Status Info Alert Notices: Not Supported 00:15:20.272 EGE Aggregate Log Change Notices: Not Supported 00:15:20.272 Normal NVM Subsystem Shutdown event: Not Supported 00:15:20.272 Zone Descriptor Change Notices: Not Supported 00:15:20.272 Discovery Log Change Notices: Supported 00:15:20.272 Controller Attributes 00:15:20.272 128-bit Host Identifier: Not Supported 00:15:20.272 Non-Operational Permissive Mode: Not Supported 00:15:20.272 NVM Sets: Not Supported 00:15:20.272 Read Recovery Levels: Not Supported 00:15:20.272 Endurance Groups: Not Supported 00:15:20.272 Predictable Latency Mode: Not Supported 00:15:20.272 Traffic Based Keep ALive: Not Supported 00:15:20.272 Namespace Granularity: Not Supported 00:15:20.272 SQ Associations: Not Supported 00:15:20.272 UUID List: Not Supported 00:15:20.272 Multi-Domain Subsystem: Not Supported 00:15:20.272 Fixed Capacity Management: Not Supported 00:15:20.272 Variable Capacity Management: Not Supported 00:15:20.272 Delete Endurance Group: Not Supported 00:15:20.272 Delete NVM Set: Not Supported 00:15:20.272 Extended LBA Formats Supported: Not Supported 00:15:20.272 Flexible Data Placement Supported: Not Supported 00:15:20.272 00:15:20.272 Controller Memory Buffer Support 00:15:20.272 ================================ 00:15:20.273 Supported: No 00:15:20.273 00:15:20.273 Persistent Memory Region Support 00:15:20.273 ================================ 00:15:20.273 Supported: No 00:15:20.273 00:15:20.273 Admin Command Set Attributes 00:15:20.273 ============================ 00:15:20.273 Security Send/Receive: Not Supported 00:15:20.273 Format NVM: Not Supported 00:15:20.273 Firmware Activate/Download: Not Supported 00:15:20.273 Namespace Management: Not Supported 00:15:20.273 Device Self-Test: Not Supported 00:15:20.273 Directives: Not Supported 00:15:20.273 NVMe-MI: Not Supported 00:15:20.273 Virtualization Management: Not Supported 00:15:20.273 Doorbell Buffer Config: Not Supported 00:15:20.273 Get LBA Status Capability: Not Supported 00:15:20.273 Command & Feature Lockdown Capability: Not Supported 00:15:20.273 Abort Command Limit: 1 00:15:20.273 Async 
Event Request Limit: 4 00:15:20.273 Number of Firmware Slots: N/A 00:15:20.273 Firmware Slot 1 Read-Only: N/A 00:15:20.273 Firmware Activation Without Reset: N/A 00:15:20.273 Multiple Update Detection Support: N/A 00:15:20.273 Firmware Update Granularity: No Information Provided 00:15:20.273 Per-Namespace SMART Log: No 00:15:20.273 Asymmetric Namespace Access Log Page: Not Supported 00:15:20.273 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:20.273 Command Effects Log Page: Not Supported 00:15:20.273 Get Log Page Extended Data: Supported 00:15:20.273 Telemetry Log Pages: Not Supported 00:15:20.273 Persistent Event Log Pages: Not Supported 00:15:20.273 Supported Log Pages Log Page: May Support 00:15:20.273 Commands Supported & Effects Log Page: Not Supported 00:15:20.273 Feature Identifiers & Effects Log Page:May Support 00:15:20.273 NVMe-MI Commands & Effects Log Page: May Support 00:15:20.273 Data Area 4 for Telemetry Log: Not Supported 00:15:20.273 Error Log Page Entries Supported: 128 00:15:20.273 Keep Alive: Not Supported 00:15:20.273 00:15:20.273 NVM Command Set Attributes 00:15:20.273 ========================== 00:15:20.273 Submission Queue Entry Size 00:15:20.273 Max: 1 00:15:20.273 Min: 1 00:15:20.273 Completion Queue Entry Size 00:15:20.273 Max: 1 00:15:20.273 Min: 1 00:15:20.273 Number of Namespaces: 0 00:15:20.273 Compare Command: Not Supported 00:15:20.273 Write Uncorrectable Command: Not Supported 00:15:20.273 Dataset Management Command: Not Supported 00:15:20.273 Write Zeroes Command: Not Supported 00:15:20.273 Set Features Save Field: Not Supported 00:15:20.273 Reservations: Not Supported 00:15:20.273 Timestamp: Not Supported 00:15:20.273 Copy: Not Supported 00:15:20.273 Volatile Write Cache: Not Present 00:15:20.273 Atomic Write Unit (Normal): 1 00:15:20.273 Atomic Write Unit (PFail): 1 00:15:20.273 Atomic Compare & Write Unit: 1 00:15:20.273 Fused Compare & Write: Supported 00:15:20.273 Scatter-Gather List 00:15:20.273 SGL Command Set: Supported 00:15:20.273 SGL Keyed: Supported 00:15:20.273 SGL Bit Bucket Descriptor: Not Supported 00:15:20.273 SGL Metadata Pointer: Not Supported 00:15:20.273 Oversized SGL: Not Supported 00:15:20.273 SGL Metadata Address: Not Supported 00:15:20.273 SGL Offset: Supported 00:15:20.273 Transport SGL Data Block: Not Supported 00:15:20.273 Replay Protected Memory Block: Not Supported 00:15:20.273 00:15:20.273 Firmware Slot Information 00:15:20.273 ========================= 00:15:20.273 Active slot: 0 00:15:20.273 00:15:20.273 00:15:20.273 Error Log 00:15:20.273 ========= 00:15:20.273 00:15:20.273 Active Namespaces 00:15:20.273 ================= 00:15:20.273 Discovery Log Page 00:15:20.273 ================== 00:15:20.273 Generation Counter: 2 00:15:20.273 Number of Records: 2 00:15:20.273 Record Format: 0 00:15:20.273 00:15:20.273 Discovery Log Entry 0 00:15:20.273 ---------------------- 00:15:20.273 Transport Type: 3 (TCP) 00:15:20.273 Address Family: 1 (IPv4) 00:15:20.273 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:20.273 Entry Flags: 00:15:20.273 Duplicate Returned Information: 1 00:15:20.273 Explicit Persistent Connection Support for Discovery: 1 00:15:20.273 Transport Requirements: 00:15:20.273 Secure Channel: Not Required 00:15:20.273 Port ID: 0 (0x0000) 00:15:20.273 Controller ID: 65535 (0xffff) 00:15:20.273 Admin Max SQ Size: 128 00:15:20.273 Transport Service Identifier: 4420 00:15:20.273 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:20.273 Transport Address: 10.0.0.2 00:15:20.273 
Discovery Log Entry 1 00:15:20.273 ---------------------- 00:15:20.273 Transport Type: 3 (TCP) 00:15:20.273 Address Family: 1 (IPv4) 00:15:20.273 Subsystem Type: 2 (NVM Subsystem) 00:15:20.274 Entry Flags: 00:15:20.274 Duplicate Returned Information: 0 00:15:20.274 Explicit Persistent Connection Support for Discovery: 0 00:15:20.274 Transport Requirements: 00:15:20.274 Secure Channel: Not Required 00:15:20.274 Port ID: 0 (0x0000) 00:15:20.274 Controller ID: 65535 (0xffff) 00:15:20.274 Admin Max SQ Size: 128 00:15:20.274 Transport Service Identifier: 4420 00:15:20.274 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:20.274 Transport Address: 10.0.0.2 [2024-06-10 08:10:41.908197] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908201] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908216] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.274 [2024-06-10 08:10:41.908222] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.274 [2024-06-10 08:10:41.908226] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908229] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6d70) on tqpair=0x2373a60 00:15:20.274 [2024-06-10 08:10:41.908319] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:20.274 [2024-06-10 08:10:41.908334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.274 [2024-06-10 08:10:41.908341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.274 [2024-06-10 08:10:41.908346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.274 [2024-06-10 08:10:41.908352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.274 [2024-06-10 08:10:41.908361] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908365] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908368] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.274 [2024-06-10 08:10:41.908375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.274 [2024-06-10 08:10:41.908396] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.274 [2024-06-10 08:10:41.908443] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.274 [2024-06-10 08:10:41.908476] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.274 [2024-06-10 08:10:41.908481] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908485] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.274 [2024-06-10 08:10:41.908493] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908497] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908501] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.274 [2024-06-10 08:10:41.908508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.274 [2024-06-10 08:10:41.908530] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.274 [2024-06-10 08:10:41.908591] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.274 [2024-06-10 08:10:41.908597] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.274 [2024-06-10 08:10:41.908601] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908605] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.274 [2024-06-10 08:10:41.908611] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:20.274 [2024-06-10 08:10:41.908615] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:20.274 [2024-06-10 08:10:41.908624] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908629] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908632] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.274 [2024-06-10 08:10:41.908639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.274 [2024-06-10 08:10:41.908656] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.274 [2024-06-10 08:10:41.908702] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.274 [2024-06-10 08:10:41.908708] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.274 [2024-06-10 08:10:41.908712] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908716] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.274 [2024-06-10 08:10:41.908727] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908731] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908734] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.274 [2024-06-10 08:10:41.908741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.274 [2024-06-10 08:10:41.908757] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.274 [2024-06-10 08:10:41.908828] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.274 [2024-06-10 08:10:41.908836] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.274 [2024-06-10 08:10:41.908839] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.274 [2024-06-10 08:10:41.908843] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.275 [2024-06-10 08:10:41.908854] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.275 
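Editorial note alongside the trace: the discovery log page reported by the first identify pass above contains two records behind 10.0.0.2:4420, the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both over TCP/IPv4. The C sketch below is illustrative only and is not part of this test run; it shows one plausible way to fetch and walk that same log page with SPDK's public host API. Error handling is trimmed and helper names such as get_log_done are invented for the example.

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
}

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvmf_discovery_log_page *log;
	struct spdk_nvme_ctrlr *ctrlr;
	uint64_t i;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same discovery service the test queried: TCP, 10.0.0.2:4420. */
	spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2014-08.org.nvmexpress.discovery");
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* A single 4 KiB read is enough for the two records reported above. */
	log = spdk_zmalloc(4096, 0, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					 log, 4096, 0, get_log_done, NULL);
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", log->genctr, log->numrec);
	for (i = 0; i < log->numrec; i++) {
		struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

		printf("entry %" PRIu64 ": subtype=%u trsvcid=%.32s subnqn=%.256s\n",
		       i, e->subtype, (const char *)e->trsvcid,
		       (const char *)e->subnqn);
	}

	spdk_free(log);
	spdk_nvme_detach(ctrlr);
	return 0;
}
```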
[2024-06-10 08:10:41.908859] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.908862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.275 [2024-06-10 08:10:41.908869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.275 [2024-06-10 08:10:41.908887] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.275 [2024-06-10 08:10:41.908928] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.275 [2024-06-10 08:10:41.908934] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.275 [2024-06-10 08:10:41.908938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.908941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.275 [2024-06-10 08:10:41.908952] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.908956] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.908959] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.275 [2024-06-10 08:10:41.908966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.275 [2024-06-10 08:10:41.908982] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.275 [2024-06-10 08:10:41.909021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.275 [2024-06-10 08:10:41.909027] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.275 [2024-06-10 08:10:41.909031] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909035] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.275 [2024-06-10 08:10:41.909045] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909049] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909052] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.275 [2024-06-10 08:10:41.909059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.275 [2024-06-10 08:10:41.909075] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.275 [2024-06-10 08:10:41.909119] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.275 [2024-06-10 08:10:41.909125] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.275 [2024-06-10 08:10:41.909129] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909133] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.275 [2024-06-10 08:10:41.909143] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909147] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909150] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.275 [2024-06-10 08:10:41.909157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.275 [2024-06-10 08:10:41.909172] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.275 [2024-06-10 08:10:41.909211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.275 [2024-06-10 08:10:41.909217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.275 [2024-06-10 08:10:41.909221] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909224] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.275 [2024-06-10 08:10:41.909235] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909239] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.275 [2024-06-10 08:10:41.909249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.275 [2024-06-10 08:10:41.909264] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.275 [2024-06-10 08:10:41.909304] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.275 [2024-06-10 08:10:41.909310] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.275 [2024-06-10 08:10:41.909313] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909317] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.275 [2024-06-10 08:10:41.909327] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909331] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909335] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.275 [2024-06-10 08:10:41.909341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.275 [2024-06-10 08:10:41.909357] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.275 [2024-06-10 08:10:41.909397] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.275 [2024-06-10 08:10:41.909403] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.275 [2024-06-10 08:10:41.909406] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909410] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.275 [2024-06-10 08:10:41.909420] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909424] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.275 [2024-06-10 08:10:41.909428] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.275 [2024-06-10 08:10:41.909434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:20.275 [2024-06-10 08:10:41.909450] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.276 [2024-06-10 08:10:41.909492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.276 [2024-06-10 08:10:41.909498] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.276 [2024-06-10 08:10:41.909502] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.909505] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.276 [2024-06-10 08:10:41.909516] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.909520] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.909524] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.276 [2024-06-10 08:10:41.909530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.276 [2024-06-10 08:10:41.909546] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.276 [2024-06-10 08:10:41.909591] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.276 [2024-06-10 08:10:41.909596] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.276 [2024-06-10 08:10:41.909600] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.909604] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.276 [2024-06-10 08:10:41.909614] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.909618] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.909621] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.276 [2024-06-10 08:10:41.909628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.276 [2024-06-10 08:10:41.909644] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.276 [2024-06-10 08:10:41.909687] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.276 [2024-06-10 08:10:41.909693] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.276 [2024-06-10 08:10:41.909696] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.909700] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.276 [2024-06-10 08:10:41.909710] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.909714] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.909718] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.276 [2024-06-10 08:10:41.909725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.276 [2024-06-10 08:10:41.909740] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.276 [2024-06-10 08:10:41.909778] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.276 [2024-06-10 08:10:41.915944] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.276 [2024-06-10 08:10:41.915967] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.915972] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.276 [2024-06-10 08:10:41.916003] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.916008] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.916012] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2373a60) 00:15:20.276 [2024-06-10 08:10:41.916020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.276 [2024-06-10 08:10:41.916045] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b6c10, cid 3, qid 0 00:15:20.276 [2024-06-10 08:10:41.916093] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.276 [2024-06-10 08:10:41.916100] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.276 [2024-06-10 08:10:41.916103] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.276 [2024-06-10 08:10:41.916107] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b6c10) on tqpair=0x2373a60 00:15:20.276 [2024-06-10 08:10:41.916116] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:15:20.276 00:15:20.276 08:10:41 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:20.276 [2024-06-10 08:10:41.959694] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
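For orientation, the spdk_nvme_identify invocation above hands the target to the driver as a single transport ID string via -r. The sketch below is illustrative, not taken from the test: it performs the equivalent step in a standalone program, parsing the same TRID, connecting, and printing a few of the identify fields the trace then reports (CNTLID 0x0001, MDTS). The output format is this example's own.

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}

	/* The TRID string is copied verbatim from the command line logged above. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* This call drives the fabric connect and the init state machine traced below. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* mdts is a power-of-two exponent over the minimum page size; the trace
	 * reports it resolved to a 131072-byte max transfer size. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("cntlid=0x%04x mdts=%u sn=%.20s mn=%.40s\n",
	       cdata->cntlid, cdata->mdts,
	       (const char *)cdata->sn, (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}
```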
00:15:20.276 [2024-06-10 08:10:41.959743] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74829 ] 00:15:20.276 [2024-06-10 08:10:42.098560] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:20.276 [2024-06-10 08:10:42.098652] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:20.276 [2024-06-10 08:10:42.098665] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:20.277 [2024-06-10 08:10:42.098676] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:20.277 [2024-06-10 08:10:42.098683] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:20.277 [2024-06-10 08:10:42.098849] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:20.277 [2024-06-10 08:10:42.098928] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ef8a60 0 00:15:20.277 [2024-06-10 08:10:42.104859] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:20.277 [2024-06-10 08:10:42.104903] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:20.277 [2024-06-10 08:10:42.104912] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:20.277 [2024-06-10 08:10:42.104916] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:20.277 [2024-06-10 08:10:42.104961] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.104969] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.104974] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef8a60) 00:15:20.277 [2024-06-10 08:10:42.104988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:20.277 [2024-06-10 08:10:42.105026] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b7f0, cid 0, qid 0 00:15:20.277 [2024-06-10 08:10:42.112846] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.277 [2024-06-10 08:10:42.112865] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.277 [2024-06-10 08:10:42.112886] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.112891] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3b7f0) on tqpair=0x1ef8a60 00:15:20.277 [2024-06-10 08:10:42.112906] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:20.277 [2024-06-10 08:10:42.112915] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:20.277 [2024-06-10 08:10:42.112921] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:20.277 [2024-06-10 08:10:42.112935] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.112941] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.112945] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef8a60) 00:15:20.277 [2024-06-10 08:10:42.112954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.277 [2024-06-10 08:10:42.112979] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b7f0, cid 0, qid 0 00:15:20.277 [2024-06-10 08:10:42.113047] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.277 [2024-06-10 08:10:42.113054] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.277 [2024-06-10 08:10:42.113058] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.113062] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3b7f0) on tqpair=0x1ef8a60 00:15:20.277 [2024-06-10 08:10:42.113068] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:20.277 [2024-06-10 08:10:42.113075] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:20.277 [2024-06-10 08:10:42.113083] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.113087] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.113091] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef8a60) 00:15:20.277 [2024-06-10 08:10:42.113114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.277 [2024-06-10 08:10:42.113134] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b7f0, cid 0, qid 0 00:15:20.277 [2024-06-10 08:10:42.113181] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.277 [2024-06-10 08:10:42.113188] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.277 [2024-06-10 08:10:42.113192] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.113196] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3b7f0) on tqpair=0x1ef8a60 00:15:20.277 [2024-06-10 08:10:42.113202] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:20.277 [2024-06-10 08:10:42.113211] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:20.277 [2024-06-10 08:10:42.113218] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.113223] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.113227] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef8a60) 00:15:20.277 [2024-06-10 08:10:42.113234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.277 [2024-06-10 08:10:42.113252] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b7f0, cid 0, qid 0 00:15:20.277 [2024-06-10 08:10:42.113300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.277 [2024-06-10 08:10:42.113307] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.277 [2024-06-10 
08:10:42.113311] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.113315] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3b7f0) on tqpair=0x1ef8a60 00:15:20.277 [2024-06-10 08:10:42.113321] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:20.277 [2024-06-10 08:10:42.113331] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.113336] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.113340] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef8a60) 00:15:20.277 [2024-06-10 08:10:42.113348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.277 [2024-06-10 08:10:42.113367] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b7f0, cid 0, qid 0 00:15:20.277 [2024-06-10 08:10:42.113414] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.277 [2024-06-10 08:10:42.113421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.277 [2024-06-10 08:10:42.113425] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.277 [2024-06-10 08:10:42.113429] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3b7f0) on tqpair=0x1ef8a60 00:15:20.278 [2024-06-10 08:10:42.113435] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:20.278 [2024-06-10 08:10:42.113440] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:20.278 [2024-06-10 08:10:42.113448] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:20.278 [2024-06-10 08:10:42.113553] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:20.278 [2024-06-10 08:10:42.113558] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:20.278 [2024-06-10 08:10:42.113567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.113571] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.113575] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef8a60) 00:15:20.278 [2024-06-10 08:10:42.113582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.278 [2024-06-10 08:10:42.113601] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b7f0, cid 0, qid 0 00:15:20.278 [2024-06-10 08:10:42.113651] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.278 [2024-06-10 08:10:42.113657] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.278 [2024-06-10 08:10:42.113661] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.113665] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3b7f0) on tqpair=0x1ef8a60 00:15:20.278 
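The state transitions above (read vs, read cap, check en, disable, CC.EN = 1) are the standard controller-enable sequence; over TCP each register access becomes a Fabrics Property Get/Set capsule, which is why every step is bracketed by capsule_cmd debug lines. As a hedged illustration only, assuming a ctrlr handle obtained as in the previous sketch, the helper below reads back the cached registers that this sequence populated.

```c
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Illustrative helper (not from the test): dump the controller registers that
 * the init state machine above just fetched via Fabrics Property Get. */
void
print_init_registers(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* VS 1.3 and "Reset Timeout: 15000 ms" (CAP.TO = 30 x 500 ms) match the
	 * identify output captured in this log; MQES is zero-based. */
	printf("VS %u.%u, CAP.MQES+1=%u, CAP.TO=%u, CSTS.RDY=%u\n",
	       vs.bits.mjr, vs.bits.mnr, cap.bits.mqes + 1, cap.bits.to,
	       csts.bits.rdy);
}
```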
[2024-06-10 08:10:42.113671] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:20.278 [2024-06-10 08:10:42.113681] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.113686] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.113690] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef8a60) 00:15:20.278 [2024-06-10 08:10:42.113697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.278 [2024-06-10 08:10:42.113716] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b7f0, cid 0, qid 0 00:15:20.278 [2024-06-10 08:10:42.113761] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.278 [2024-06-10 08:10:42.113768] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.278 [2024-06-10 08:10:42.113772] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.113776] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3b7f0) on tqpair=0x1ef8a60 00:15:20.278 [2024-06-10 08:10:42.113782] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:20.278 [2024-06-10 08:10:42.113787] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:20.278 [2024-06-10 08:10:42.113810] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:20.278 [2024-06-10 08:10:42.113825] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:20.278 [2024-06-10 08:10:42.113850] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.113855] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef8a60) 00:15:20.278 [2024-06-10 08:10:42.113863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.278 [2024-06-10 08:10:42.113885] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b7f0, cid 0, qid 0 00:15:20.278 [2024-06-10 08:10:42.113975] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.278 [2024-06-10 08:10:42.113983] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.278 [2024-06-10 08:10:42.113987] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.113991] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef8a60): datao=0, datal=4096, cccid=0 00:15:20.278 [2024-06-10 08:10:42.113996] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3b7f0) on tqpair(0x1ef8a60): expected_datao=0, payload_size=4096 00:15:20.278 [2024-06-10 08:10:42.114001] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.114009] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.114014] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.114023] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.278 [2024-06-10 08:10:42.114029] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.278 [2024-06-10 08:10:42.114032] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.114036] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3b7f0) on tqpair=0x1ef8a60 00:15:20.278 [2024-06-10 08:10:42.114046] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:20.278 [2024-06-10 08:10:42.114052] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:20.278 [2024-06-10 08:10:42.114056] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:20.278 [2024-06-10 08:10:42.114060] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:20.278 [2024-06-10 08:10:42.114065] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:20.278 [2024-06-10 08:10:42.114070] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:20.278 [2024-06-10 08:10:42.114083] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:20.278 [2024-06-10 08:10:42.114094] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.114099] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.114103] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef8a60) 00:15:20.278 [2024-06-10 08:10:42.114111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:20.278 [2024-06-10 08:10:42.114138] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b7f0, cid 0, qid 0 00:15:20.278 [2024-06-10 08:10:42.114202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.278 [2024-06-10 08:10:42.114209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.278 [2024-06-10 08:10:42.114213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.114217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3b7f0) on tqpair=0x1ef8a60 00:15:20.278 [2024-06-10 08:10:42.114229] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.114235] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.114239] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef8a60) 00:15:20.278 [2024-06-10 08:10:42.114245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.278 [2024-06-10 08:10:42.114252] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.278 [2024-06-10 08:10:42.114256] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114260] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1ef8a60) 00:15:20.279 [2024-06-10 08:10:42.114265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.279 [2024-06-10 08:10:42.114271] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114275] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114279] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ef8a60) 00:15:20.279 [2024-06-10 08:10:42.114284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.279 [2024-06-10 08:10:42.114290] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114294] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114298] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef8a60) 00:15:20.279 [2024-06-10 08:10:42.114303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.279 [2024-06-10 08:10:42.114309] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:20.279 [2024-06-10 08:10:42.114317] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:20.279 [2024-06-10 08:10:42.114324] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114329] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef8a60) 00:15:20.279 [2024-06-10 08:10:42.114336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.279 [2024-06-10 08:10:42.114356] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b7f0, cid 0, qid 0 00:15:20.279 [2024-06-10 08:10:42.114363] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3b950, cid 1, qid 0 00:15:20.279 [2024-06-10 08:10:42.114368] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bab0, cid 2, qid 0 00:15:20.279 [2024-06-10 08:10:42.114373] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bc10, cid 3, qid 0 00:15:20.279 [2024-06-10 08:10:42.114378] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bd70, cid 4, qid 0 00:15:20.279 [2024-06-10 08:10:42.114456] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.279 [2024-06-10 08:10:42.114463] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.279 [2024-06-10 08:10:42.114467] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114470] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bd70) on tqpair=0x1ef8a60 00:15:20.279 [2024-06-10 08:10:42.114477] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:20.279 [2024-06-10 08:10:42.114486] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:15:20.279 [2024-06-10 08:10:42.114495] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:20.279 [2024-06-10 08:10:42.114501] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:20.279 [2024-06-10 08:10:42.114508] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114513] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114517] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef8a60) 00:15:20.279 [2024-06-10 08:10:42.114524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:20.279 [2024-06-10 08:10:42.114543] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bd70, cid 4, qid 0 00:15:20.279 [2024-06-10 08:10:42.114602] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.279 [2024-06-10 08:10:42.114609] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.279 [2024-06-10 08:10:42.114612] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114616] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bd70) on tqpair=0x1ef8a60 00:15:20.279 [2024-06-10 08:10:42.114664] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:20.279 [2024-06-10 08:10:42.114675] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:20.279 [2024-06-10 08:10:42.114683] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114688] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef8a60) 00:15:20.279 [2024-06-10 08:10:42.114695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.279 [2024-06-10 08:10:42.114714] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bd70, cid 4, qid 0 00:15:20.279 [2024-06-10 08:10:42.114778] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.279 [2024-06-10 08:10:42.114785] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.279 [2024-06-10 08:10:42.114789] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114792] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef8a60): datao=0, datal=4096, cccid=4 00:15:20.279 [2024-06-10 08:10:42.114810] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3bd70) on tqpair(0x1ef8a60): expected_datao=0, payload_size=4096 00:15:20.279 [2024-06-10 08:10:42.114816] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114823] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114827] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114836] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.279 
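Once identify controller completes, the trace shows the driver arming four ASYNC EVENT REQUEST commands and a 5-second keep-alive. The fragment below is a hedged sketch, with the function name invented for illustration, of what an application layers on top of that: register an AER callback and keep the admin queue polled. It assumes a ctrlr obtained as in the earlier connect sketch.

```c
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Illustrative AER callback; the trace above shows four ASYNC EVENT REQUEST
 * commands queued, so this fires when the target reports an event. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("async event notification: cdw0=0x%08x\n", cpl->cdw0);
	}
}

void
watch_async_events(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* The application must keep polling the admin queue; the same call also
	 * services the keep-alive timer armed in the trace ("every 5000000 us"). */
	spdk_nvme_ctrlr_process_admin_completions(ctrlr);
}
```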
[2024-06-10 08:10:42.114842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.279 [2024-06-10 08:10:42.114846] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114850] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bd70) on tqpair=0x1ef8a60 00:15:20.279 [2024-06-10 08:10:42.114867] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:20.279 [2024-06-10 08:10:42.114878] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:20.279 [2024-06-10 08:10:42.114888] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:20.279 [2024-06-10 08:10:42.114896] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.279 [2024-06-10 08:10:42.114900] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef8a60) 00:15:20.280 [2024-06-10 08:10:42.114908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.280 [2024-06-10 08:10:42.114929] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bd70, cid 4, qid 0 00:15:20.280 [2024-06-10 08:10:42.115002] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.280 [2024-06-10 08:10:42.115009] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.280 [2024-06-10 08:10:42.115013] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115016] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef8a60): datao=0, datal=4096, cccid=4 00:15:20.280 [2024-06-10 08:10:42.115021] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3bd70) on tqpair(0x1ef8a60): expected_datao=0, payload_size=4096 00:15:20.280 [2024-06-10 08:10:42.115025] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115032] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115036] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115044] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.280 [2024-06-10 08:10:42.115050] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.280 [2024-06-10 08:10:42.115054] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115058] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bd70) on tqpair=0x1ef8a60 00:15:20.280 [2024-06-10 08:10:42.115070] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:20.280 [2024-06-10 08:10:42.115080] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:20.280 [2024-06-10 08:10:42.115088] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115093] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef8a60) 00:15:20.280 [2024-06-10 08:10:42.115100] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.280 [2024-06-10 08:10:42.115120] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bd70, cid 4, qid 0 00:15:20.280 [2024-06-10 08:10:42.115184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.280 [2024-06-10 08:10:42.115192] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.280 [2024-06-10 08:10:42.115195] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115199] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef8a60): datao=0, datal=4096, cccid=4 00:15:20.280 [2024-06-10 08:10:42.115203] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3bd70) on tqpair(0x1ef8a60): expected_datao=0, payload_size=4096 00:15:20.280 [2024-06-10 08:10:42.115208] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115214] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115218] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115226] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.280 [2024-06-10 08:10:42.115232] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.280 [2024-06-10 08:10:42.115236] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115240] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bd70) on tqpair=0x1ef8a60 00:15:20.280 [2024-06-10 08:10:42.115253] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:20.280 [2024-06-10 08:10:42.115263] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:20.280 [2024-06-10 08:10:42.115272] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:20.280 [2024-06-10 08:10:42.115278] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:20.280 [2024-06-10 08:10:42.115284] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:20.280 [2024-06-10 08:10:42.115289] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:20.280 [2024-06-10 08:10:42.115293] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:20.280 [2024-06-10 08:10:42.115298] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:20.280 [2024-06-10 08:10:42.115317] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115322] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef8a60) 00:15:20.280 [2024-06-10 08:10:42.115330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
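The identify-active-ns and identify-ns steps traced above end with "Namespace 1 was added". Assuming a connected ctrlr as in the earlier sketch, the illustrative helper below walks that active-namespace list through the public API; the function name and output format are this example's own.

```c
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Illustrative helper (not from the test): enumerate the active namespaces the
 * identify flow above just populated. */
void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("nsid %u: %u-byte sectors, %" PRIu64 " bytes total\n",
		       nsid, spdk_nvme_ns_get_sector_size(ns),
		       spdk_nvme_ns_get_size(ns));
	}
}
```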
00:15:20.280 [2024-06-10 08:10:42.115337] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115341] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115345] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef8a60) 00:15:20.280 [2024-06-10 08:10:42.115351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.280 [2024-06-10 08:10:42.115374] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bd70, cid 4, qid 0 00:15:20.280 [2024-06-10 08:10:42.115381] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bed0, cid 5, qid 0 00:15:20.280 [2024-06-10 08:10:42.115449] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.280 [2024-06-10 08:10:42.115456] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.280 [2024-06-10 08:10:42.115459] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115463] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bd70) on tqpair=0x1ef8a60 00:15:20.280 [2024-06-10 08:10:42.115471] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.280 [2024-06-10 08:10:42.115477] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.280 [2024-06-10 08:10:42.115480] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115484] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bed0) on tqpair=0x1ef8a60 00:15:20.280 [2024-06-10 08:10:42.115495] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115500] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef8a60) 00:15:20.280 [2024-06-10 08:10:42.115507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.280 [2024-06-10 08:10:42.115526] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bed0, cid 5, qid 0 00:15:20.280 [2024-06-10 08:10:42.115573] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.280 [2024-06-10 08:10:42.115580] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.280 [2024-06-10 08:10:42.115584] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.280 [2024-06-10 08:10:42.115587] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bed0) on tqpair=0x1ef8a60 00:15:20.281 [2024-06-10 08:10:42.115599] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.115604] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef8a60) 00:15:20.281 [2024-06-10 08:10:42.115611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.281 [2024-06-10 08:10:42.115629] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bed0, cid 5, qid 0 00:15:20.281 [2024-06-10 08:10:42.115678] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.281 [2024-06-10 08:10:42.115685] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.281 
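The remaining initialization steps in this stretch of the trace are plain admin commands: Get Features for arbitration, power management, temperature threshold and number of queues, plus several Get Log Page reads. As a rough sketch only, with helper names invented here and an already-connected ctrlr assumed, two of those Get Features commands could be re-issued from application code like this:

```c
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static int g_outstanding;

static void
feature_done(void *name, const struct spdk_nvme_cpl *cpl)
{
	printf("%s: cdw0=0x%08x\n", (const char *)name, cpl->cdw0);
	g_outstanding--;
}

/* Illustrative: cdw10=0x07 is NUMBER OF QUEUES and cdw10=0x04 is TEMPERATURE
 * THRESHOLD in the GET FEATURES lines traced here. */
void
query_features(struct spdk_nvme_ctrlr *ctrlr)
{
	g_outstanding = 2;
	spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_NUMBER_OF_QUEUES,
					0, NULL, 0, feature_done, "number of queues");
	spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					0, NULL, 0, feature_done, "temperature threshold");

	/* Both completions come back on the admin queue. */
	while (g_outstanding > 0) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
```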
[2024-06-10 08:10:42.115689] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.115692] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bed0) on tqpair=0x1ef8a60 00:15:20.281 [2024-06-10 08:10:42.115704] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.115709] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef8a60) 00:15:20.281 [2024-06-10 08:10:42.115716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.281 [2024-06-10 08:10:42.115734] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bed0, cid 5, qid 0 00:15:20.281 [2024-06-10 08:10:42.115779] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.281 [2024-06-10 08:10:42.115799] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.281 [2024-06-10 08:10:42.115803] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.115807] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bed0) on tqpair=0x1ef8a60 00:15:20.281 [2024-06-10 08:10:42.115822] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.115827] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef8a60) 00:15:20.281 [2024-06-10 08:10:42.115835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.281 [2024-06-10 08:10:42.115842] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.115847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef8a60) 00:15:20.281 [2024-06-10 08:10:42.115853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.281 [2024-06-10 08:10:42.115860] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.115864] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ef8a60) 00:15:20.281 [2024-06-10 08:10:42.115870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.281 [2024-06-10 08:10:42.115882] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.115888] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ef8a60) 00:15:20.281 [2024-06-10 08:10:42.115894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.281 [2024-06-10 08:10:42.115914] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bed0, cid 5, qid 0 00:15:20.281 [2024-06-10 08:10:42.115922] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bd70, cid 4, qid 0 00:15:20.281 [2024-06-10 08:10:42.115927] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c030, cid 6, qid 0 00:15:20.281 [2024-06-10 08:10:42.115932] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c190, cid 7, qid 0 00:15:20.281 [2024-06-10 08:10:42.116061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.281 [2024-06-10 08:10:42.116068] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.281 [2024-06-10 08:10:42.116071] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116075] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef8a60): datao=0, datal=8192, cccid=5 00:15:20.281 [2024-06-10 08:10:42.116079] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3bed0) on tqpair(0x1ef8a60): expected_datao=0, payload_size=8192 00:15:20.281 [2024-06-10 08:10:42.116084] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116100] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116105] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116111] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.281 [2024-06-10 08:10:42.116117] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.281 [2024-06-10 08:10:42.116120] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116124] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef8a60): datao=0, datal=512, cccid=4 00:15:20.281 [2024-06-10 08:10:42.116128] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3bd70) on tqpair(0x1ef8a60): expected_datao=0, payload_size=512 00:15:20.281 [2024-06-10 08:10:42.116132] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116139] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116142] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116148] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.281 [2024-06-10 08:10:42.116153] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.281 [2024-06-10 08:10:42.116156] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116160] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef8a60): datao=0, datal=512, cccid=6 00:15:20.281 [2024-06-10 08:10:42.116164] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3c030) on tqpair(0x1ef8a60): expected_datao=0, payload_size=512 00:15:20.281 [2024-06-10 08:10:42.116168] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116174] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116178] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116183] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.281 [2024-06-10 08:10:42.116189] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.281 [2024-06-10 08:10:42.116192] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116195] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef8a60): datao=0, datal=4096, cccid=7 00:15:20.281 [2024-06-10 08:10:42.116200] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3c190) on tqpair(0x1ef8a60): expected_datao=0, payload_size=4096 00:15:20.281 [2024-06-10 08:10:42.116204] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116210] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116214] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116222] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.281 [2024-06-10 08:10:42.116228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.281 [2024-06-10 08:10:42.116231] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116235] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bed0) on tqpair=0x1ef8a60 00:15:20.281 [2024-06-10 08:10:42.116253] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.281 [2024-06-10 08:10:42.116261] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.281 [2024-06-10 08:10:42.116264] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.281 [2024-06-10 08:10:42.116268] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bd70) on tqpair=0x1ef8a60 00:15:20.281 ===================================================== 00:15:20.281 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:20.282 ===================================================== 00:15:20.282 Controller Capabilities/Features 00:15:20.282 ================================ 00:15:20.282 Vendor ID: 8086 00:15:20.282 Subsystem Vendor ID: 8086 00:15:20.282 Serial Number: SPDK00000000000001 00:15:20.282 Model Number: SPDK bdev Controller 00:15:20.282 Firmware Version: 24.09 00:15:20.282 Recommended Arb Burst: 6 00:15:20.282 IEEE OUI Identifier: e4 d2 5c 00:15:20.282 Multi-path I/O 00:15:20.282 May have multiple subsystem ports: Yes 00:15:20.282 May have multiple controllers: Yes 00:15:20.282 Associated with SR-IOV VF: No 00:15:20.282 Max Data Transfer Size: 131072 00:15:20.282 Max Number of Namespaces: 32 00:15:20.282 Max Number of I/O Queues: 127 00:15:20.282 NVMe Specification Version (VS): 1.3 00:15:20.282 NVMe Specification Version (Identify): 1.3 00:15:20.282 Maximum Queue Entries: 128 00:15:20.282 Contiguous Queues Required: Yes 00:15:20.282 Arbitration Mechanisms Supported 00:15:20.282 Weighted Round Robin: Not Supported 00:15:20.282 Vendor Specific: Not Supported 00:15:20.282 Reset Timeout: 15000 ms 00:15:20.282 Doorbell Stride: 4 bytes 00:15:20.282 NVM Subsystem Reset: Not Supported 00:15:20.282 Command Sets Supported 00:15:20.282 NVM Command Set: Supported 00:15:20.282 Boot Partition: Not Supported 00:15:20.282 Memory Page Size Minimum: 4096 bytes 00:15:20.282 Memory Page Size Maximum: 4096 bytes 00:15:20.282 Persistent Memory Region: Not Supported 00:15:20.282 Optional Asynchronous Events Supported 00:15:20.282 Namespace Attribute Notices: Supported 00:15:20.282 Firmware Activation Notices: Not Supported 00:15:20.282 ANA Change Notices: Not Supported 00:15:20.282 PLE Aggregate Log Change Notices: Not Supported 00:15:20.282 LBA Status Info Alert Notices: Not Supported 00:15:20.282 EGE Aggregate Log Change Notices: Not Supported 00:15:20.282 Normal NVM Subsystem Shutdown event: Not Supported 00:15:20.282 Zone Descriptor Change Notices: Not Supported 00:15:20.282 Discovery Log Change Notices: Not 
Supported 00:15:20.282 Controller Attributes 00:15:20.282 128-bit Host Identifier: Supported 00:15:20.282 Non-Operational Permissive Mode: Not Supported 00:15:20.282 NVM Sets: Not Supported 00:15:20.282 Read Recovery Levels: Not Supported 00:15:20.282 Endurance Groups: Not Supported 00:15:20.282 Predictable Latency Mode: Not Supported 00:15:20.282 Traffic Based Keep ALive: Not Supported 00:15:20.282 Namespace Granularity: Not Supported 00:15:20.282 SQ Associations: Not Supported 00:15:20.282 UUID List: Not Supported 00:15:20.282 Multi-Domain Subsystem: Not Supported 00:15:20.282 Fixed Capacity Management: Not Supported 00:15:20.282 Variable Capacity Management: Not Supported 00:15:20.282 Delete Endurance Group: Not Supported 00:15:20.282 Delete NVM Set: Not Supported 00:15:20.282 Extended LBA Formats Supported: Not Supported 00:15:20.282 Flexible Data Placement Supported: Not Supported 00:15:20.282 00:15:20.282 Controller Memory Buffer Support 00:15:20.282 ================================ 00:15:20.282 Supported: No 00:15:20.282 00:15:20.282 Persistent Memory Region Support 00:15:20.282 ================================ 00:15:20.282 Supported: No 00:15:20.282 00:15:20.282 Admin Command Set Attributes 00:15:20.282 ============================ 00:15:20.282 Security Send/Receive: Not Supported 00:15:20.282 Format NVM: Not Supported 00:15:20.282 Firmware Activate/Download: Not Supported 00:15:20.282 Namespace Management: Not Supported 00:15:20.282 Device Self-Test: Not Supported 00:15:20.282 Directives: Not Supported 00:15:20.282 NVMe-MI: Not Supported 00:15:20.282 Virtualization Management: Not Supported 00:15:20.282 Doorbell Buffer Config: Not Supported 00:15:20.282 Get LBA Status Capability: Not Supported 00:15:20.282 Command & Feature Lockdown Capability: Not Supported 00:15:20.282 Abort Command Limit: 4 00:15:20.282 Async Event Request Limit: 4 00:15:20.282 Number of Firmware Slots: N/A 00:15:20.282 Firmware Slot 1 Read-Only: N/A 00:15:20.282 Firmware Activation Without Reset: N/A 00:15:20.282 Multiple Update Detection Support: N/A 00:15:20.282 Firmware Update Granularity: No Information Provided 00:15:20.282 Per-Namespace SMART Log: No 00:15:20.282 Asymmetric Namespace Access Log Page: Not Supported 00:15:20.282 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:20.282 Command Effects Log Page: Supported 00:15:20.282 Get Log Page Extended Data: Supported 00:15:20.282 Telemetry Log Pages: Not Supported 00:15:20.282 Persistent Event Log Pages: Not Supported 00:15:20.282 Supported Log Pages Log Page: May Support 00:15:20.282 Commands Supported & Effects Log Page: Not Supported 00:15:20.282 Feature Identifiers & Effects Log Page:May Support 00:15:20.282 NVMe-MI Commands & Effects Log Page: May Support 00:15:20.282 Data Area 4 for Telemetry Log: Not Supported 00:15:20.282 Error Log Page Entries Supported: 128 00:15:20.282 Keep Alive: Supported 00:15:20.282 Keep Alive Granularity: 10000 ms 00:15:20.282 00:15:20.282 NVM Command Set Attributes 00:15:20.282 ========================== 00:15:20.282 Submission Queue Entry Size 00:15:20.282 Max: 64 00:15:20.282 Min: 64 00:15:20.282 Completion Queue Entry Size 00:15:20.282 Max: 16 00:15:20.282 Min: 16 00:15:20.282 Number of Namespaces: 32 00:15:20.282 Compare Command: Supported 00:15:20.282 Write Uncorrectable Command: Not Supported 00:15:20.282 Dataset Management Command: Supported 00:15:20.282 Write Zeroes Command: Supported 00:15:20.282 Set Features Save Field: Not Supported 00:15:20.282 Reservations: Supported 00:15:20.282 Timestamp: Not Supported 
00:15:20.282 Copy: Supported 00:15:20.282 Volatile Write Cache: Present 00:15:20.282 Atomic Write Unit (Normal): 1 00:15:20.282 Atomic Write Unit (PFail): 1 00:15:20.282 Atomic Compare & Write Unit: 1 00:15:20.282 Fused Compare & Write: Supported 00:15:20.283 Scatter-Gather List 00:15:20.283 SGL Command Set: Supported 00:15:20.283 SGL Keyed: Supported 00:15:20.283 SGL Bit Bucket Descriptor: Not Supported 00:15:20.283 SGL Metadata Pointer: Not Supported 00:15:20.283 Oversized SGL: Not Supported 00:15:20.283 SGL Metadata Address: Not Supported 00:15:20.283 SGL Offset: Supported 00:15:20.283 Transport SGL Data Block: Not Supported 00:15:20.283 Replay Protected Memory Block: Not Supported 00:15:20.283 00:15:20.283 Firmware Slot Information 00:15:20.283 ========================= 00:15:20.283 Active slot: 1 00:15:20.283 Slot 1 Firmware Revision: 24.09 00:15:20.283 00:15:20.283 00:15:20.283 Commands Supported and Effects 00:15:20.283 ============================== 00:15:20.283 Admin Commands 00:15:20.283 -------------- 00:15:20.283 Get Log Page (02h): Supported 00:15:20.283 Identify (06h): Supported 00:15:20.283 Abort (08h): Supported 00:15:20.283 Set Features (09h): Supported 00:15:20.283 Get Features (0Ah): Supported 00:15:20.283 Asynchronous Event Request (0Ch): Supported 00:15:20.283 Keep Alive (18h): Supported 00:15:20.283 I/O Commands 00:15:20.283 ------------ 00:15:20.283 Flush (00h): Supported LBA-Change 00:15:20.283 Write (01h): Supported LBA-Change 00:15:20.283 Read (02h): Supported 00:15:20.283 Compare (05h): Supported 00:15:20.283 Write Zeroes (08h): Supported LBA-Change 00:15:20.283 Dataset Management (09h): Supported LBA-Change 00:15:20.283 Copy (19h): Supported LBA-Change 00:15:20.283 Unknown (79h): Supported LBA-Change 00:15:20.283 Unknown (7Ah): Supported 00:15:20.283 00:15:20.283 Error Log 00:15:20.283 ========= 00:15:20.283 00:15:20.283 Arbitration 00:15:20.283 =========== 00:15:20.283 Arbitration Burst: 1 00:15:20.283 00:15:20.283 Power Management 00:15:20.283 ================ 00:15:20.283 Number of Power States: 1 00:15:20.283 Current Power State: Power State #0 00:15:20.283 Power State #0: 00:15:20.283 Max Power: 0.00 W 00:15:20.283 Non-Operational State: Operational 00:15:20.283 Entry Latency: Not Reported 00:15:20.283 Exit Latency: Not Reported 00:15:20.283 Relative Read Throughput: 0 00:15:20.283 Relative Read Latency: 0 00:15:20.283 Relative Write Throughput: 0 00:15:20.283 Relative Write Latency: 0 00:15:20.283 Idle Power: Not Reported 00:15:20.283 Active Power: Not Reported 00:15:20.283 Non-Operational Permissive Mode: Not Supported 00:15:20.283 00:15:20.283 Health Information 00:15:20.283 ================== 00:15:20.283 Critical Warnings: 00:15:20.283 Available Spare Space: OK 00:15:20.283 Temperature: OK 00:15:20.283 Device Reliability: OK 00:15:20.283 Read Only: No 00:15:20.283 Volatile Memory Backup: OK 00:15:20.283 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:20.283 Temperature Threshold: [2024-06-10 08:10:42.116279] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.283 [2024-06-10 08:10:42.116285] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.283 [2024-06-10 08:10:42.116289] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.283 [2024-06-10 08:10:42.116292] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3c030) on tqpair=0x1ef8a60 00:15:20.283 [2024-06-10 08:10:42.116303] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.283 [2024-06-10 
08:10:42.116309] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.283 [2024-06-10 08:10:42.116312] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.283 [2024-06-10 08:10:42.116316] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3c190) on tqpair=0x1ef8a60 00:15:20.283 [2024-06-10 08:10:42.116418] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.283 [2024-06-10 08:10:42.116426] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ef8a60) 00:15:20.283 [2024-06-10 08:10:42.116433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.283 [2024-06-10 08:10:42.116481] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c190, cid 7, qid 0 00:15:20.283 [2024-06-10 08:10:42.116533] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.283 [2024-06-10 08:10:42.116540] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.283 [2024-06-10 08:10:42.116544] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.284 [2024-06-10 08:10:42.116548] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3c190) on tqpair=0x1ef8a60 00:15:20.284 [2024-06-10 08:10:42.116585] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:20.284 [2024-06-10 08:10:42.116600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.284 [2024-06-10 08:10:42.116608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.284 [2024-06-10 08:10:42.116614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.284 [2024-06-10 08:10:42.116621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.284 [2024-06-10 08:10:42.116630] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.284 [2024-06-10 08:10:42.116635] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.284 [2024-06-10 08:10:42.116639] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef8a60) 00:15:20.284 [2024-06-10 08:10:42.116647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.284 [2024-06-10 08:10:42.116669] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bc10, cid 3, qid 0 00:15:20.284 [2024-06-10 08:10:42.116719] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.284 [2024-06-10 08:10:42.116726] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.284 [2024-06-10 08:10:42.116730] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.284 [2024-06-10 08:10:42.116734] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bc10) on tqpair=0x1ef8a60 00:15:20.284 [2024-06-10 08:10:42.116743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.284 [2024-06-10 08:10:42.116748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.284 
[2024-06-10 08:10:42.116752] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef8a60) 00:15:20.284 [2024-06-10 08:10:42.116760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.284 [2024-06-10 08:10:42.116812] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bc10, cid 3, qid 0 00:15:20.284 [2024-06-10 08:10:42.120833] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.284 [2024-06-10 08:10:42.120854] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.284 [2024-06-10 08:10:42.120860] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.284 [2024-06-10 08:10:42.120864] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bc10) on tqpair=0x1ef8a60 00:15:20.284 [2024-06-10 08:10:42.120871] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:20.284 [2024-06-10 08:10:42.120876] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:20.284 [2024-06-10 08:10:42.120889] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.284 [2024-06-10 08:10:42.120895] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.284 [2024-06-10 08:10:42.120899] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef8a60) 00:15:20.284 [2024-06-10 08:10:42.120907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.284 [2024-06-10 08:10:42.120933] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3bc10, cid 3, qid 0 00:15:20.284 [2024-06-10 08:10:42.120988] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.284 [2024-06-10 08:10:42.121002] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.284 [2024-06-10 08:10:42.121006] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.284 [2024-06-10 08:10:42.121010] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f3bc10) on tqpair=0x1ef8a60 00:15:20.284 [2024-06-10 08:10:42.121019] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:15:20.687 0 Kelvin (-273 Celsius) 00:15:20.687 Available Spare: 0% 00:15:20.687 Available Spare Threshold: 0% 00:15:20.687 Life Percentage Used: 0% 00:15:20.687 Data Units Read: 0 00:15:20.687 Data Units Written: 0 00:15:20.687 Host Read Commands: 0 00:15:20.687 Host Write Commands: 0 00:15:20.687 Controller Busy Time: 0 minutes 00:15:20.687 Power Cycles: 0 00:15:20.687 Power On Hours: 0 hours 00:15:20.687 Unsafe Shutdowns: 0 00:15:20.687 Unrecoverable Media Errors: 0 00:15:20.687 Lifetime Error Log Entries: 0 00:15:20.687 Warning Temperature Time: 0 minutes 00:15:20.687 Critical Temperature Time: 0 minutes 00:15:20.687 00:15:20.687 Number of Queues 00:15:20.687 ================ 00:15:20.687 Number of I/O Submission Queues: 127 00:15:20.687 Number of I/O Completion Queues: 127 00:15:20.687 00:15:20.687 Active Namespaces 00:15:20.687 ================= 00:15:20.687 Namespace ID:1 00:15:20.687 Error Recovery Timeout: Unlimited 00:15:20.687 Command Set Identifier: NVM (00h) 00:15:20.687 Deallocate: Supported 00:15:20.687 Deallocated/Unwritten Error: Not 
Supported 00:15:20.687 Deallocated Read Value: Unknown 00:15:20.687 Deallocate in Write Zeroes: Not Supported 00:15:20.687 Deallocated Guard Field: 0xFFFF 00:15:20.687 Flush: Supported 00:15:20.687 Reservation: Supported 00:15:20.687 Namespace Sharing Capabilities: Multiple Controllers 00:15:20.687 Size (in LBAs): 131072 (0GiB) 00:15:20.687 Capacity (in LBAs): 131072 (0GiB) 00:15:20.687 Utilization (in LBAs): 131072 (0GiB) 00:15:20.687 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:20.687 EUI64: ABCDEF0123456789 00:15:20.687 UUID: 43843df3-6407-4354-b088-7b95ca8915ce 00:15:20.687 Thin Provisioning: Not Supported 00:15:20.687 Per-NS Atomic Units: Yes 00:15:20.687 Atomic Boundary Size (Normal): 0 00:15:20.687 Atomic Boundary Size (PFail): 0 00:15:20.687 Atomic Boundary Offset: 0 00:15:20.687 Maximum Single Source Range Length: 65535 00:15:20.687 Maximum Copy Length: 65535 00:15:20.687 Maximum Source Range Count: 1 00:15:20.687 NGUID/EUI64 Never Reused: No 00:15:20.687 Namespace Write Protected: No 00:15:20.687 Number of LBA Formats: 1 00:15:20.687 Current LBA Format: LBA Format #00 00:15:20.687 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:20.687 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:20.687 rmmod nvme_tcp 00:15:20.687 rmmod nvme_fabrics 00:15:20.687 rmmod nvme_keyring 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74785 ']' 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74785 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 74785 ']' 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 74785 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 74785 00:15:20.687 killing process with pid 74785 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 74785' 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 74785 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 74785 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.687 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.688 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.688 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.688 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.688 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.958 08:10:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:20.958 ************************************ 00:15:20.958 END TEST nvmf_identify 00:15:20.958 ************************************ 00:15:20.958 00:15:20.958 real 0m2.502s 00:15:20.959 user 0m7.003s 00:15:20.959 sys 0m0.614s 00:15:20.959 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:20.959 08:10:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:20.959 08:10:42 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:20.959 08:10:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:20.959 08:10:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:20.959 08:10:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.959 ************************************ 00:15:20.959 START TEST nvmf_perf 00:15:20.959 ************************************ 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:20.959 * Looking for test storage... 
00:15:20.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:20.959 Cannot find device "nvmf_tgt_br" 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.959 Cannot find device "nvmf_tgt_br2" 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:20.959 Cannot find device "nvmf_tgt_br" 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:20.959 Cannot find device "nvmf_tgt_br2" 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:20.959 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:21.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:21.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:21.219 
08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:21.219 08:10:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:21.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:15:21.219 00:15:21.219 --- 10.0.0.2 ping statistics --- 00:15:21.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.219 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:21.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:21.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:21.219 00:15:21.219 --- 10.0.0.3 ping statistics --- 00:15:21.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.219 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:21.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:21.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:21.219 00:15:21.219 --- 10.0.0.1 ping statistics --- 00:15:21.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.219 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:21.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74993 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74993 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 74993 ']' 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:21.219 08:10:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:21.478 [2024-06-10 08:10:43.135358] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:15:21.478 [2024-06-10 08:10:43.135460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.478 [2024-06-10 08:10:43.277682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:21.738 [2024-06-10 08:10:43.397591] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.738 [2024-06-10 08:10:43.397666] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
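The nvmf_veth_init steps above give the target its own network namespace before any perf traffic flows. A condensed sketch of that topology, using only the interface names and addresses visible in this log (the real helper in nvmf/common.sh adds extra checks and also wires up the second target interface nvmf_tgt_if2 at 10.0.0.3, omitted here):

  # namespace plus the veth pairs named in the log
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator side stays in the default namespace, target side moves into the new one
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together and let NVMe/TCP through to port 4420
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # reachability check mirroring the pings above
  ping -c 1 10.0.0.2

With that in place, the nvmf_tgt launched below via "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt" serves 10.0.0.2:4420 while the initiator-side tools connect from the default namespace.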
00:15:21.738 [2024-06-10 08:10:43.397680] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.738 [2024-06-10 08:10:43.397691] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.738 [2024-06-10 08:10:43.397700] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.738 [2024-06-10 08:10:43.397870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.738 [2024-06-10 08:10:43.398433] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.738 [2024-06-10 08:10:43.398644] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.738 [2024-06-10 08:10:43.398654] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.738 [2024-06-10 08:10:43.457658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:22.306 08:10:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:22.306 08:10:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:15:22.306 08:10:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.306 08:10:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:22.306 08:10:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:22.565 08:10:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.565 08:10:44 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:22.565 08:10:44 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:22.825 08:10:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:22.825 08:10:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:23.083 08:10:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:23.083 08:10:44 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:23.341 08:10:45 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:23.341 08:10:45 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:23.341 08:10:45 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:23.341 08:10:45 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:23.341 08:10:45 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:23.600 [2024-06-10 08:10:45.377299] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.600 08:10:45 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:23.858 08:10:45 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:23.858 08:10:45 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:24.116 08:10:45 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:24.116 08:10:45 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:15:24.374 08:10:46 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.633 [2024-06-10 08:10:46.416387] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.633 08:10:46 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.891 08:10:46 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:24.891 08:10:46 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:24.891 08:10:46 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:24.891 08:10:46 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:26.266 Initializing NVMe Controllers 00:15:26.266 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:26.266 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:26.266 Initialization complete. Launching workers. 00:15:26.266 ======================================================== 00:15:26.266 Latency(us) 00:15:26.266 Device Information : IOPS MiB/s Average min max 00:15:26.266 PCIE (0000:00:10.0) NSID 1 from core 0: 22869.30 89.33 1398.71 336.33 8889.02 00:15:26.266 ======================================================== 00:15:26.266 Total : 22869.30 89.33 1398.71 336.33 8889.02 00:15:26.266 00:15:26.266 08:10:47 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:27.201 Initializing NVMe Controllers 00:15:27.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:27.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:27.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:27.201 Initialization complete. Launching workers. 00:15:27.201 ======================================================== 00:15:27.201 Latency(us) 00:15:27.201 Device Information : IOPS MiB/s Average min max 00:15:27.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3108.99 12.14 321.36 110.01 7275.61 00:15:27.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8184.97 5881.44 12061.96 00:15:27.201 ======================================================== 00:15:27.201 Total : 3231.99 12.62 620.62 110.01 12061.96 00:15:27.201 00:15:27.460 08:10:49 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:28.859 Initializing NVMe Controllers 00:15:28.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:28.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:28.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:28.859 Initialization complete. Launching workers. 
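For context on the perf results that follow: every spdk_nvme_perf invocation in this test targets the same subsystem, which host/perf.sh assembled through the rpc.py calls logged above. A condensed sketch of that bring-up, restricted to names that appear in this log (the $RPC shorthand and the comments are this sketch's own; the script also attaches the local PCIe controller at 0000:00:10.0 via gen_nvme.sh, which is what yields Nvme0n1):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # malloc bdev (size 64, block size 512) -> Malloc0
  $RPC bdev_malloc_create 64 512
  # TCP transport, then one subsystem exposing both bdevs as namespaces
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  # listen on the namespaced target address, plus the discovery service
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # one of the initiator-side runs shown in this log; -q/-o/-t and extra flags vary per step
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'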
00:15:28.859 ======================================================== 00:15:28.859 Latency(us) 00:15:28.859 Device Information : IOPS MiB/s Average min max 00:15:28.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8832.57 34.50 3623.15 609.99 11284.68 00:15:28.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3913.95 15.29 8216.68 6046.13 29046.94 00:15:28.859 ======================================================== 00:15:28.859 Total : 12746.52 49.79 5033.64 609.99 29046.94 00:15:28.859 00:15:28.859 08:10:50 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:28.859 08:10:50 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:31.393 Initializing NVMe Controllers 00:15:31.393 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:31.393 Controller IO queue size 128, less than required. 00:15:31.393 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:31.393 Controller IO queue size 128, less than required. 00:15:31.393 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:31.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:31.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:31.393 Initialization complete. Launching workers. 00:15:31.393 ======================================================== 00:15:31.393 Latency(us) 00:15:31.393 Device Information : IOPS MiB/s Average min max 00:15:31.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1629.11 407.28 79432.81 40913.57 140903.69 00:15:31.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 636.65 159.16 208080.82 66334.37 345052.61 00:15:31.393 ======================================================== 00:15:31.393 Total : 2265.77 566.44 115581.36 40913.57 345052.61 00:15:31.393 00:15:31.393 08:10:53 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:31.652 Initializing NVMe Controllers 00:15:31.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:31.652 Controller IO queue size 128, less than required. 00:15:31.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:31.652 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:31.652 Controller IO queue size 128, less than required. 00:15:31.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:31.652 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:15:31.652 WARNING: Some requested NVMe devices were skipped 00:15:31.652 No valid NVMe controllers or AIO or URING devices found 00:15:31.652 08:10:53 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:34.187 Initializing NVMe Controllers 00:15:34.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:34.187 Controller IO queue size 128, less than required. 00:15:34.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:34.187 Controller IO queue size 128, less than required. 00:15:34.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:34.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:34.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:34.187 Initialization complete. Launching workers. 00:15:34.187 00:15:34.187 ==================== 00:15:34.187 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:34.187 TCP transport: 00:15:34.187 polls: 8343 00:15:34.187 idle_polls: 5128 00:15:34.187 sock_completions: 3215 00:15:34.187 nvme_completions: 5517 00:15:34.187 submitted_requests: 8166 00:15:34.187 queued_requests: 1 00:15:34.187 00:15:34.187 ==================== 00:15:34.187 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:34.187 TCP transport: 00:15:34.187 polls: 10645 00:15:34.187 idle_polls: 7214 00:15:34.187 sock_completions: 3431 00:15:34.187 nvme_completions: 5981 00:15:34.187 submitted_requests: 8982 00:15:34.187 queued_requests: 1 00:15:34.187 ======================================================== 00:15:34.187 Latency(us) 00:15:34.187 Device Information : IOPS MiB/s Average min max 00:15:34.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1378.59 344.65 95400.64 56896.49 141371.07 00:15:34.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1494.56 373.64 85764.72 40195.32 127309.92 00:15:34.187 ======================================================== 00:15:34.187 Total : 2873.15 718.29 90388.22 40195.32 141371.07 00:15:34.187 00:15:34.187 08:10:55 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:34.187 08:10:55 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.446 rmmod nvme_tcp 00:15:34.446 rmmod nvme_fabrics 00:15:34.446 rmmod nvme_keyring 00:15:34.446 08:10:56 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74993 ']' 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74993 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 74993 ']' 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 74993 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 74993 00:15:34.446 killing process with pid 74993 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 74993' 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 74993 00:15:34.446 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 74993 00:15:35.383 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.383 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:35.383 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:35.383 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.383 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:35.383 08:10:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.383 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.383 08:10:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.383 08:10:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:35.383 ************************************ 00:15:35.383 END TEST nvmf_perf 00:15:35.383 ************************************ 00:15:35.383 00:15:35.383 real 0m14.427s 00:15:35.383 user 0m53.287s 00:15:35.383 sys 0m4.019s 00:15:35.383 08:10:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:35.383 08:10:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:35.383 08:10:57 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:35.383 08:10:57 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:35.383 08:10:57 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:35.383 08:10:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:35.384 ************************************ 00:15:35.384 START TEST nvmf_fio_host 00:15:35.384 ************************************ 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:35.384 * Looking for test storage... 
00:15:35.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
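Because NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init in the trace that follows: the SPDK target runs inside the nvmf_tgt_ns_spdk network namespace and reaches the host-side initiator (10.0.0.1) over veth pairs joined by a bridge. A condensed sketch of the equivalent setup, using the addresses and interface names from the trace below (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way):

# create the target namespace and the veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# address the initiator side and the target side
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# bring everything up and join the bridge ends
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# allow NVMe/TCP traffic to port 4420 and verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2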
00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.384 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:35.385 Cannot find device "nvmf_tgt_br" 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.385 Cannot find device "nvmf_tgt_br2" 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:35.385 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:35.644 Cannot find device "nvmf_tgt_br" 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:35.644 Cannot find device "nvmf_tgt_br2" 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.644 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:35.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:15:35.903 00:15:35.903 --- 10.0.0.2 ping statistics --- 00:15:35.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.903 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:35.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:35.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:35.903 00:15:35.903 --- 10.0.0.3 ping statistics --- 00:15:35.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.903 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:35.903 00:15:35.903 --- 10.0.0.1 ping statistics --- 00:15:35.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.903 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:35.903 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.904 08:10:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75395 00:15:35.904 08:10:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:35.904 08:10:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:35.904 08:10:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75395 00:15:35.904 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 75395 ']' 00:15:35.904 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.904 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:35.904 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.904 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:35.904 08:10:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.904 [2024-06-10 08:10:57.622343] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:15:35.904 [2024-06-10 08:10:57.622630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.904 [2024-06-10 08:10:57.761881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.163 [2024-06-10 08:10:57.869644] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.163 [2024-06-10 08:10:57.869976] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
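Behind these startup notices, the flow the fio host test is driving is: launch nvmf_tgt inside the target namespace, wait for its RPC socket at /var/tmp/spdk.sock, configure the TCP transport and a subsystem over the namespace IP, then hand that connection string to fio's SPDK plugin. A condensed sketch of that sequence, taken from the commands and RPCs traced below (the only liberty is shortening the absolute /home/vagrant/spdk_repo/spdk paths to repo-relative ones; a discovery listener is added the same way):

# start the target inside the namespace, then wait for /var/tmp/spdk.sock
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# TCP transport, a malloc-backed namespace, and a listener on the namespace IP
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# drive I/O through the SPDK fio plugin against that listener
LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096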
00:15:36.163 [2024-06-10 08:10:57.870116] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.163 [2024-06-10 08:10:57.870265] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.163 [2024-06-10 08:10:57.870301] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.163 [2024-06-10 08:10:57.870581] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.163 [2024-06-10 08:10:57.870664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.163 [2024-06-10 08:10:57.870726] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.163 [2024-06-10 08:10:57.870726] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.163 [2024-06-10 08:10:57.924456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:36.771 08:10:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:36.771 08:10:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:15:36.771 08:10:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:37.030 [2024-06-10 08:10:58.781320] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.030 08:10:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:37.031 08:10:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:37.031 08:10:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.031 08:10:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:37.289 Malloc1 00:15:37.289 08:10:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:37.548 08:10:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:37.807 08:10:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.065 [2024-06-10 08:10:59.764721] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.065 08:10:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.324 08:10:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:15:38.324 08:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:15:38.324 08:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:15:38.324 08:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:15:38.324 08:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.324 08:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:15:38.324 08:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:15:38.324 08:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:15:38.324 08:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:15:38.324 08:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:38.324 08:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:38.324 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:38.324 fio-3.35 00:15:38.324 Starting 1 thread 00:15:40.876 00:15:40.876 test: (groupid=0, jobs=1): err= 0: pid=75475: Mon Jun 10 08:11:02 2024 00:15:40.876 read: IOPS=8985, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2006msec) 00:15:40.876 slat (nsec): min=1858, max=431251, avg=2502.58, stdev=4113.88 00:15:40.876 clat (usec): min=2779, max=13362, avg=7411.31, stdev=531.51 00:15:40.876 lat (usec): min=2812, max=13364, avg=7413.82, stdev=531.27 00:15:40.876 clat percentiles (usec): 00:15:40.876 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6980], 00:15:40.876 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 00:15:40.876 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:15:40.876 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[11207], 99.95th=[11863], 00:15:40.876 | 99.99th=[13304] 00:15:40.876 bw ( KiB/s): min=35544, max=36520, per=99.94%, avg=35918.00, stdev=446.83, samples=4 00:15:40.876 iops : min= 8886, max= 9130, avg=8979.50, stdev=111.71, samples=4 00:15:40.876 write: IOPS=9008, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2006msec); 0 zone resets 00:15:40.876 slat (nsec): min=1993, max=280034, avg=2590.65, stdev=2682.53 00:15:40.876 clat (usec): min=2640, max=12482, avg=6760.26, stdev=478.53 00:15:40.876 
lat (usec): min=2654, max=12485, avg=6762.85, stdev=478.39 00:15:40.876 clat percentiles (usec): 00:15:40.876 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6390], 00:15:40.876 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6849], 00:15:40.876 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7504], 00:15:40.876 | 99.00th=[ 7767], 99.50th=[ 7898], 99.90th=[10290], 99.95th=[11338], 00:15:40.876 | 99.99th=[12387] 00:15:40.876 bw ( KiB/s): min=35656, max=36296, per=99.96%, avg=36018.00, stdev=324.46, samples=4 00:15:40.876 iops : min= 8914, max= 9074, avg=9004.50, stdev=81.12, samples=4 00:15:40.876 lat (msec) : 4=0.07%, 10=99.77%, 20=0.16% 00:15:40.876 cpu : usr=69.13%, sys=23.44%, ctx=37, majf=0, minf=5 00:15:40.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:40.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.876 issued rwts: total=18024,18071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.876 00:15:40.876 Run status group 0 (all jobs): 00:15:40.876 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2006-2006msec 00:15:40.876 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.0MB), run=2006-2006msec 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.877 08:11:02 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:40.877 08:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:40.877 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:40.877 fio-3.35 00:15:40.877 Starting 1 thread 00:15:43.411 00:15:43.411 test: (groupid=0, jobs=1): err= 0: pid=75522: Mon Jun 10 08:11:04 2024 00:15:43.411 read: IOPS=8396, BW=131MiB/s (138MB/s)(264MiB/2010msec) 00:15:43.411 slat (usec): min=2, max=107, avg= 3.82, stdev= 2.42 00:15:43.411 clat (usec): min=2902, max=16699, avg=8436.18, stdev=2451.09 00:15:43.411 lat (usec): min=2906, max=16703, avg=8440.00, stdev=2451.10 00:15:43.411 clat percentiles (usec): 00:15:43.411 | 1.00th=[ 4146], 5.00th=[ 4817], 10.00th=[ 5342], 20.00th=[ 6194], 00:15:43.411 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 8160], 60.00th=[ 8979], 00:15:43.411 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11731], 95.00th=[12911], 00:15:43.411 | 99.00th=[14746], 99.50th=[15139], 99.90th=[16057], 99.95th=[16057], 00:15:43.412 | 99.99th=[16581] 00:15:43.412 bw ( KiB/s): min=62976, max=74176, per=51.02%, avg=68536.00, stdev=5815.11, samples=4 00:15:43.412 iops : min= 3936, max= 4636, avg=4283.50, stdev=363.44, samples=4 00:15:43.412 write: IOPS=4809, BW=75.2MiB/s (78.8MB/s)(140MiB/1862msec); 0 zone resets 00:15:43.412 slat (usec): min=31, max=361, avg=39.19, stdev= 9.95 00:15:43.412 clat (usec): min=7064, max=21514, avg=12190.86, stdev=2212.21 00:15:43.412 lat (usec): min=7097, max=21550, avg=12230.05, stdev=2213.38 00:15:43.412 clat percentiles (usec): 00:15:43.412 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290], 00:15:43.412 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11863], 60.00th=[12518], 00:15:43.412 | 70.00th=[13173], 80.00th=[14091], 90.00th=[15401], 95.00th=[16188], 00:15:43.412 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19530], 99.95th=[21103], 00:15:43.412 | 99.99th=[21627] 00:15:43.412 bw ( KiB/s): min=64064, max=77536, per=92.52%, avg=71200.00, stdev=6204.76, samples=4 00:15:43.412 iops : min= 4004, max= 4846, avg=4450.00, stdev=387.80, samples=4 00:15:43.412 lat (msec) : 4=0.38%, 10=53.03%, 20=46.56%, 50=0.03% 00:15:43.412 cpu : usr=81.28%, sys=14.09%, ctx=33, majf=0, minf=12 00:15:43.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:43.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:43.412 issued rwts: total=16876,8956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:43.412 00:15:43.412 Run status group 0 (all jobs): 00:15:43.412 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=264MiB (276MB), run=2010-2010msec 00:15:43.412 WRITE: bw=75.2MiB/s (78.8MB/s), 75.2MiB/s-75.2MiB/s 
(78.8MB/s-78.8MB/s), io=140MiB (147MB), run=1862-1862msec 00:15:43.412 08:11:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.412 08:11:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:43.412 08:11:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:43.412 08:11:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:43.412 08:11:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:43.412 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.412 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:43.412 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.412 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:43.412 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.412 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.412 rmmod nvme_tcp 00:15:43.412 rmmod nvme_fabrics 00:15:43.670 rmmod nvme_keyring 00:15:43.670 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.670 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:43.670 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:43.670 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75395 ']' 00:15:43.670 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75395 00:15:43.670 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 75395 ']' 00:15:43.670 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 75395 00:15:43.670 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:15:43.670 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:43.670 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 75395 00:15:43.670 killing process with pid 75395 00:15:43.671 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:43.671 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:43.671 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 75395' 00:15:43.671 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 75395 00:15:43.671 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 75395 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:43.930 ************************************ 00:15:43.930 END TEST nvmf_fio_host 00:15:43.930 ************************************ 00:15:43.930 00:15:43.930 real 0m8.547s 00:15:43.930 user 0m34.701s 00:15:43.930 sys 0m2.393s 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:43.930 08:11:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.930 08:11:05 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:43.930 08:11:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:43.930 08:11:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:43.930 08:11:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:43.930 ************************************ 00:15:43.930 START TEST nvmf_failover 00:15:43.930 ************************************ 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:43.930 * Looking for test storage... 00:15:43.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- 
# source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.930 08:11:05 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:43.931 08:11:05 
nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.931 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:44.190 Cannot find device "nvmf_tgt_br" 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:44.190 Cannot find device "nvmf_tgt_br2" 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip 
link set nvmf_init_br down 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:44.190 Cannot find device "nvmf_tgt_br" 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:44.190 Cannot find device "nvmf_tgt_br2" 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:44.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:44.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:44.190 08:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:44.190 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:44.190 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:44.190 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:44.190 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:44.190 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:44.190 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:44.190 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:44.190 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:44.190 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:44.190 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:44.191 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:44.191 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:44.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:15:44.450 00:15:44.450 --- 10.0.0.2 ping statistics --- 00:15:44.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.450 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:44.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:44.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:44.450 00:15:44.450 --- 10.0.0.3 ping statistics --- 00:15:44.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.450 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:44.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:44.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:15:44.450 00:15:44.450 --- 10.0.0.1 ping statistics --- 00:15:44.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.450 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75740 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75740 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 75740 ']' 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.450 08:11:06 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:44.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.450 08:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.451 08:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:44.451 08:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:44.451 [2024-06-10 08:11:06.215718] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:15:44.451 [2024-06-10 08:11:06.215838] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.710 [2024-06-10 08:11:06.359564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:44.710 [2024-06-10 08:11:06.474655] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.710 [2024-06-10 08:11:06.474723] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.710 [2024-06-10 08:11:06.474738] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.710 [2024-06-10 08:11:06.474749] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.710 [2024-06-10 08:11:06.474758] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.710 [2024-06-10 08:11:06.474914] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.710 [2024-06-10 08:11:06.475590] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.710 [2024-06-10 08:11:06.475628] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.710 [2024-06-10 08:11:06.534160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:45.648 08:11:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:45.648 08:11:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:15:45.648 08:11:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:45.648 08:11:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:45.648 08:11:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:45.648 08:11:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.648 08:11:07 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:45.648 [2024-06-10 08:11:07.414773] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.648 08:11:07 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:45.907 Malloc0 00:15:45.907 08:11:07 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:46.166 08:11:07 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:46.426 08:11:08 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.685 [2024-06-10 08:11:08.413101] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.685 08:11:08 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:46.944 [2024-06-10 08:11:08.637297] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:46.944 08:11:08 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:47.203 [2024-06-10 08:11:08.865591] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:47.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:47.203 08:11:08 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75797 00:15:47.203 08:11:08 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:47.203 08:11:08 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75797 /var/tmp/bdevperf.sock 00:15:47.203 08:11:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 75797 ']' 00:15:47.203 08:11:08 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:47.203 08:11:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:47.203 08:11:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:47.203 08:11:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
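(For reference, the target-side configuration the trace above just replayed reduces to the RPC sequence below. This is a condensed sketch reassembled from the commands echoed by host/failover.sh, not additional captured output; "scripts/rpc.py" stands in for the absolute /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the run, and the address, ports and NQN are exactly as above.)

# TCP transport, with the same options the test passes
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# malloc bdev that will back the namespace
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# subsystem cnode1: allow any host (-a), fixed serial number (-s)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# three listeners on the same address so the host can fail over between ports
for port in 4420 4421 4422; do
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done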
00:15:47.203 08:11:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:47.203 08:11:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:48.139 08:11:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:48.139 08:11:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:15:48.139 08:11:09 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.398 NVMe0n1 00:15:48.657 08:11:10 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.915 00:15:48.915 08:11:10 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75821 00:15:48.915 08:11:10 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:48.915 08:11:10 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:49.852 08:11:11 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.111 08:11:11 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:53.396 08:11:14 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.396 00:15:53.396 08:11:15 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:53.655 08:11:15 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:56.978 08:11:18 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.978 [2024-06-10 08:11:18.729453] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.978 08:11:18 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:57.914 08:11:19 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:58.173 08:11:20 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75821 00:16:04.747 0 00:16:04.747 08:11:25 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75797 00:16:04.747 08:11:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 75797 ']' 00:16:04.747 08:11:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 75797 00:16:04.747 08:11:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:16:04.747 08:11:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:04.747 08:11:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 75797 00:16:04.747 killing process with pid 75797 00:16:04.747 08:11:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:04.747 
08:11:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:04.747 08:11:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 75797' 00:16:04.747 08:11:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 75797 00:16:04.747 08:11:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 75797 00:16:04.747 08:11:26 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:04.747 [2024-06-10 08:11:08.943057] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:16:04.747 [2024-06-10 08:11:08.943209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75797 ] 00:16:04.747 [2024-06-10 08:11:09.085829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.747 [2024-06-10 08:11:09.202634] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.747 [2024-06-10 08:11:09.262195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:04.747 Running I/O for 15 seconds... 00:16:04.747 [2024-06-10 08:11:11.846441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.747 [2024-06-10 08:11:11.846519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.846564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.846586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.846609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.846628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.846648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.846667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.846688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.846706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.846727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.846746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.846766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.846811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.846834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.846853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.846874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.846892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.846913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.846932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.846953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.847005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.847029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.847048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.847069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.847087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.847107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.847125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.847146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.847164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.747 [2024-06-10 08:11:11.847195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.747 [2024-06-10 08:11:11.847213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847661] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.847964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.847984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848089] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.748 [2024-06-10 08:11:11.848715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.748 [2024-06-10 08:11:11.848733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.848754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.848772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.848807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.848827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.848848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.848877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.848897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.848915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.848935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.848953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 
[2024-06-10 08:11:11.848973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.848999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.849961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.849980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.850000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.850018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.850038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.850056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.850077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.850094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.850115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.850133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.850154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.850184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.850205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72656 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.850224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.850244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.749 [2024-06-10 08:11:11.850262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.749 [2024-06-10 08:11:11.850282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 
08:11:11.850620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.850976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.850994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.851032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.851071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.851116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.851155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.750 [2024-06-10 08:11:11.851479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.750 [2024-06-10 08:11:11.851759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.750 [2024-06-10 08:11:11.851790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86830 is same with the state(5) to be set 00:16:04.750 [2024-06-10 08:11:11.851826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.750 [2024-06-10 08:11:11.851840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.750 [2024-06-10 08:11:11.851856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71960 len:8 PRP1 0x0 PRP2 0x0 00:16:04.751 [2024-06-10 08:11:11.851873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:11.851892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.751 [2024-06-10 08:11:11.851911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.751 [2024-06-10 08:11:11.851926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72856 len:8 PRP1 0x0 PRP2 0x0 00:16:04.751 [2024-06-10 08:11:11.851944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:11.852017] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d86830 was disconnected and freed. reset controller. 00:16:04.751 [2024-06-10 08:11:11.852039] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:04.751 [2024-06-10 08:11:11.852114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.751 [2024-06-10 08:11:11.852140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:11.852160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.751 [2024-06-10 08:11:11.852178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:11.852201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.751 [2024-06-10 08:11:11.852219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:11.852238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.751 [2024-06-10 08:11:11.852255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:11.852273] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:04.751 [2024-06-10 08:11:11.852320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c81090 (9): Bad file descriptor 00:16:04.751 [2024-06-10 08:11:11.857177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:04.751 [2024-06-10 08:11:11.890676] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
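(The run of "ABORTED - SQ DELETION" completions above is the expected signature of the first failover step: once the listener serving the active path is removed, the in-flight I/Os on that qpair are aborted, bdev_nvme reports the disconnect, starts failover from 10.0.0.2:4420 to 10.0.0.2:4421, and resets the controller successfully. In terms of commands already shown earlier in the trace, the host-side sequence that produces this is roughly the sketch below; identifiers are exactly as in this run, with "scripts/rpc.py" again standing in for the full path.)

# register two paths for the same controller name on the bdevperf RPC socket
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# with I/O running, tear down the first listener on the target to force the failover
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The entries that follow (from 08:11:15 onward) show the same pattern for the next hop, after the 10.0.0.2:4422 path has been attached and the 4421 listener removed.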
00:16:04.751 [2024-06-10 08:11:15.441247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.441609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.441636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441667] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.441697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.441728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.441758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.441797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.441829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.441875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.441972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.441988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.442003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.442035] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.442064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.442095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.442108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.442137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.442151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.442165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.751 [2024-06-10 08:11:15.442178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.442192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.442205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.442220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.442233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.442255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.442268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.751 [2024-06-10 08:11:15.442283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.751 [2024-06-10 08:11:15.442296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95456 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.752 [2024-06-10 08:11:15.442434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.752 [2024-06-10 08:11:15.442462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.752 [2024-06-10 08:11:15.442491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.752 [2024-06-10 08:11:15.442518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.752 [2024-06-10 08:11:15.442546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.752 [2024-06-10 08:11:15.442573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.752 [2024-06-10 08:11:15.442606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.752 [2024-06-10 08:11:15.442635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:04.752 [2024-06-10 08:11:15.442679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.442971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.752 [2024-06-10 08:11:15.442988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.752 [2024-06-10 08:11:15.443003] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443335] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.753 [2024-06-10 08:11:15.443644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.443963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.443978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 
[2024-06-10 08:11:15.443994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.444009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.444074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.444088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.444117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.444131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.444145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.444158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.444172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.444186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.444200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.444212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.753 [2024-06-10 08:11:15.444227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.753 [2024-06-10 08:11:15.444240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.754 [2024-06-10 08:11:15.444780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.754 [2024-06-10 08:11:15.444834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.754 [2024-06-10 08:11:15.444874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.754 [2024-06-10 08:11:15.444917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.754 [2024-06-10 08:11:15.444948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.444964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.754 [2024-06-10 08:11:15.444979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.445009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.754 [2024-06-10 08:11:15.445023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.445038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.754 [2024-06-10 08:11:15.445052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.445066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8b030 is same with the 
state(5) to be set 00:16:04.754 [2024-06-10 08:11:15.445082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.754 [2024-06-10 08:11:15.445092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.754 [2024-06-10 08:11:15.445103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95784 len:8 PRP1 0x0 PRP2 0x0 00:16:04.754 [2024-06-10 08:11:15.445116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.445130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.754 [2024-06-10 08:11:15.445140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.754 [2024-06-10 08:11:15.445150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96240 len:8 PRP1 0x0 PRP2 0x0 00:16:04.754 [2024-06-10 08:11:15.445169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.445183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.754 [2024-06-10 08:11:15.445192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.754 [2024-06-10 08:11:15.445202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96248 len:8 PRP1 0x0 PRP2 0x0 00:16:04.754 [2024-06-10 08:11:15.445215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.445228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.754 [2024-06-10 08:11:15.445238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.754 [2024-06-10 08:11:15.445248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96256 len:8 PRP1 0x0 PRP2 0x0 00:16:04.754 [2024-06-10 08:11:15.445259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.445272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.754 [2024-06-10 08:11:15.445291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.754 [2024-06-10 08:11:15.445301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96264 len:8 PRP1 0x0 PRP2 0x0 00:16:04.754 [2024-06-10 08:11:15.445313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.445326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.754 [2024-06-10 08:11:15.445336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.754 [2024-06-10 08:11:15.445346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96272 len:8 PRP1 0x0 PRP2 0x0 00:16:04.754 [2024-06-10 08:11:15.445358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 
08:11:15.445371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.754 [2024-06-10 08:11:15.445381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.754 [2024-06-10 08:11:15.445390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96280 len:8 PRP1 0x0 PRP2 0x0 00:16:04.754 [2024-06-10 08:11:15.445403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.754 [2024-06-10 08:11:15.445415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.754 [2024-06-10 08:11:15.445425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.755 [2024-06-10 08:11:15.445435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96288 len:8 PRP1 0x0 PRP2 0x0 00:16:04.755 [2024-06-10 08:11:15.445447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.445459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.755 [2024-06-10 08:11:15.445469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.755 [2024-06-10 08:11:15.445478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96296 len:8 PRP1 0x0 PRP2 0x0 00:16:04.755 [2024-06-10 08:11:15.445490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.445503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.755 [2024-06-10 08:11:15.445513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.755 [2024-06-10 08:11:15.445528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96304 len:8 PRP1 0x0 PRP2 0x0 00:16:04.755 [2024-06-10 08:11:15.445541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.445554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.755 [2024-06-10 08:11:15.445563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.755 [2024-06-10 08:11:15.445573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96312 len:8 PRP1 0x0 PRP2 0x0 00:16:04.755 [2024-06-10 08:11:15.445586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.445598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.755 [2024-06-10 08:11:15.445608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.755 [2024-06-10 08:11:15.445617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96320 len:8 PRP1 0x0 PRP2 0x0 00:16:04.755 [2024-06-10 08:11:15.445629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.445642] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.755 [2024-06-10 08:11:15.445651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.755 [2024-06-10 08:11:15.445698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96328 len:8 PRP1 0x0 PRP2 0x0 00:16:04.755 [2024-06-10 08:11:15.445711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.445725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.755 [2024-06-10 08:11:15.445735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.755 [2024-06-10 08:11:15.445746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96336 len:8 PRP1 0x0 PRP2 0x0 00:16:04.755 [2024-06-10 08:11:15.445759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.445773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.755 [2024-06-10 08:11:15.445783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.755 [2024-06-10 08:11:15.445802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96344 len:8 PRP1 0x0 PRP2 0x0 00:16:04.755 [2024-06-10 08:11:15.445816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.445840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.755 [2024-06-10 08:11:15.445853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.755 [2024-06-10 08:11:15.445864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96352 len:8 PRP1 0x0 PRP2 0x0 00:16:04.755 [2024-06-10 08:11:15.445877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.445892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.755 [2024-06-10 08:11:15.445902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.755 [2024-06-10 08:11:15.445913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96360 len:8 PRP1 0x0 PRP2 0x0 00:16:04.755 [2024-06-10 08:11:15.445926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.445990] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d8b030 was disconnected and freed. reset controller. 
00:16:04.755 [2024-06-10 08:11:15.446016] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:04.755 [2024-06-10 08:11:15.446103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.755 [2024-06-10 08:11:15.446123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.446138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.755 [2024-06-10 08:11:15.446151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.446165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.755 [2024-06-10 08:11:15.446178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.446191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.755 [2024-06-10 08:11:15.446205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:15.446218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:04.755 [2024-06-10 08:11:15.450119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:04.755 [2024-06-10 08:11:15.450171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c81090 (9): Bad file descriptor 00:16:04.755 [2024-06-10 08:11:15.483715] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
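[editor's note] This second episode repeats the same pattern one hop further: the queued I/O on qpair 0x1d8b030 is aborted with the same SQ-deletion status, and bdev_nvme starts failover from 10.0.0.2:4421 to 10.0.0.2:4422 before resetting the controller again. The sketch below only illustrates the ordered-path progression visible in the log (4420 -> 4421 -> 4422); it is not the SPDK test script or the bdev_nvme failover implementation, and the path list is taken from the trids printed above.

/*
 * Illustrative sketch only: advance through an ordered list of target
 * ports the way the failover messages in this log progress,
 * 10.0.0.2:4420 -> 4421 -> 4422, moving to the next path each time the
 * current one is reported down.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static const char *paths[] = {
    "10.0.0.2:4420", "10.0.0.2:4421", "10.0.0.2:4422"
};
static size_t current;

/* Move to the next configured path; returns false when none remain. */
static bool failover_to_next(void)
{
    if (current + 1 >= sizeof(paths) / sizeof(paths[0])) {
        return false;
    }
    printf("Start failover from %s to %s\n", paths[current], paths[current + 1]);
    current++;
    return true;
}

int main(void)
{
    /* Two failovers, matching the two episodes recorded in this log. */
    failover_to_next();
    failover_to_next();
    return 0;
}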
00:16:04.755 [2024-06-10 08:11:20.012748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.755 [2024-06-10 08:11:20.012849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.012905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.755 [2024-06-10 08:11:20.012923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.012939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.755 [2024-06-10 08:11:20.012954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.012985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.755 [2024-06-10 08:11:20.012999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.755 [2024-06-10 08:11:20.013048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.755 [2024-06-10 08:11:20.013076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.755 [2024-06-10 08:11:20.013126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.755 [2024-06-10 08:11:20.013167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.755 [2024-06-10 08:11:20.013220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.755 [2024-06-10 08:11:20.013248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013279] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.755 [2024-06-10 08:11:20.013293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.755 [2024-06-10 08:11:20.013322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.755 [2024-06-10 08:11:20.013349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.755 [2024-06-10 08:11:20.013377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.755 [2024-06-10 08:11:20.013405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.755 [2024-06-10 08:11:20.013419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.013432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.013460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.013502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.013550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.013604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.013633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.013662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.013706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.013735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.013765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.013794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.013849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.013879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.013924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.013955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.013971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.013985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43992 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.014330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.014359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.014387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.014415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.014450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.014479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.014508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.756 [2024-06-10 08:11:20.014546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 
08:11:20.014632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.756 [2024-06-10 08:11:20.014706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.756 [2024-06-10 08:11:20.014731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.014747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.014772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.014788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.014802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.014818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.014832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.014859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.014874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.014897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.014912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.014928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.014950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.014965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.014980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.014996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.757 [2024-06-10 08:11:20.015289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.757 [2024-06-10 08:11:20.015323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.757 [2024-06-10 08:11:20.015359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.757 [2024-06-10 08:11:20.015402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.757 [2024-06-10 08:11:20.015430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.757 [2024-06-10 08:11:20.015458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.757 [2024-06-10 08:11:20.015487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.757 [2024-06-10 08:11:20.015528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:04.757 [2024-06-10 08:11:20.015718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.757 [2024-06-10 08:11:20.015733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.015771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.015787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.015803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.015818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.015834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.015849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.015866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.015880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.015909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.015925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.015941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.015956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.015982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.015996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.016041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.016101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 
08:11:20.016117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.016146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.016184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.016214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.758 [2024-06-10 08:11:20.016813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.016844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:55 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.016887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.016918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.016950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.016976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.016991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.017037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.017062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.017077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.017091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.017106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.017120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.017135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.017149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.017212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.758 [2024-06-10 08:11:20.017226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.758 [2024-06-10 08:11:20.017248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.759 [2024-06-10 08:11:20.017262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44256 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.759 [2024-06-10 08:11:20.017293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.759 [2024-06-10 08:11:20.017322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.759 [2024-06-10 08:11:20.017351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.759 [2024-06-10 08:11:20.017381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.759 [2024-06-10 08:11:20.017421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.759 [2024-06-10 08:11:20.017451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da5d30 is same with the state(5) to be set 00:16:04.759 [2024-06-10 08:11:20.017481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:04.759 [2024-06-10 08:11:20.017502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:04.759 [2024-06-10 08:11:20.017520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44304 len:8 PRP1 0x0 PRP2 0x0 00:16:04.759 [2024-06-10 08:11:20.017533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017616] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1da5d30 was disconnected and freed. reset controller. 
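The long run of WRITE/READ commands above that complete as "ABORTED - SQ DELETION" is the expected fallout of tearing down the active path: every I/O still queued on the qpair for 10.0.0.2:4422 is aborted when that submission queue goes away, and bdev_nvme requeues them once the controller has been reset onto a surviving path. An ad-hoc way to gauge how many commands were caught in flight is to count those notices in the captured bdevperf output; this is not part of failover.sh, and it assumes this bdevperf pass was redirected to the same try.txt file the script inspects later in the trace:

    # Ad-hoc check (not in failover.sh): count I/Os aborted by the SQ deletion.
    # Assumes this bdevperf pass was captured to try.txt, as the later pass is.
    grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt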
00:16:04.759 [2024-06-10 08:11:20.017634] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:04.759 [2024-06-10 08:11:20.017713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.759 [2024-06-10 08:11:20.017734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.759 [2024-06-10 08:11:20.017774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.759 [2024-06-10 08:11:20.017812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.759 [2024-06-10 08:11:20.017843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.759 [2024-06-10 08:11:20.017870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:04.759 [2024-06-10 08:11:20.017920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c81090 (9): Bad file descriptor 00:16:04.759 [2024-06-10 08:11:20.021939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:04.759 [2024-06-10 08:11:20.055858] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
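The sequence just logged is one complete failover cycle: bdev_nvme_failover_trid switches the controller from 10.0.0.2:4422 back to 10.0.0.2:4420, the outstanding admin ASYNC EVENT REQUESTs are aborted, the failed controller is disconnected, and the reset against the new path succeeds. The alternate paths exist because the test attaches the same subsystem once per target port. A sketch of that setup, built only from rpc.py calls that appear verbatim elsewhere in this trace (the exact order and timing used by failover.sh are assumed, not shown here):

    # Sketch of the multi-path setup implied by the trace (ordering assumed).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # Target side: expose the subsystem on the extra ports (calls without -s go to
    # the nvmf target's default RPC socket).
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Initiator side: attach the same controller once per port; the extra attaches
    # register failover trids for the same NVMe0n1 bdev.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Detaching the path currently carrying I/O triggers a failover like the one above.
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1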
00:16:04.759 00:16:04.759 Latency(us) 00:16:04.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.759 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:04.759 Verification LBA range: start 0x0 length 0x4000 00:16:04.759 NVMe0n1 : 15.01 8893.78 34.74 184.19 0.00 14068.14 651.64 19541.64 00:16:04.759 =================================================================================================================== 00:16:04.759 Total : 8893.78 34.74 184.19 0.00 14068.14 651.64 19541.64 00:16:04.759 Received shutdown signal, test time was about 15.000000 seconds 00:16:04.759 00:16:04.759 Latency(us) 00:16:04.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.759 =================================================================================================================== 00:16:04.759 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75994 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75994 /var/tmp/bdevperf.sock 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 75994 ']' 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:04.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
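With the 15-second verify pass finished and the expected three "Resetting controller successful" events confirmed (count=3 against the (( count != 3 )) check), failover.sh launches a fresh bdevperf in wait-for-RPC mode (-z) and waits on its UNIX socket before rebuilding the controller over RPC. A simplified stand-in for that launch-and-wait step, using the same binary, socket, and I/O parameters shown in the trace (waitforlisten itself lives in autotest_common.sh; the polling loop below only approximates it):

    # Simplified stand-in for the "start bdevperf -z and wait for its RPC socket" step.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # Poll the RPC socket until the app answers; only then issue bdev_nvme_* RPCs.
    until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done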
00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:04.759 08:11:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:05.326 08:11:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:05.326 08:11:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:16:05.326 08:11:27 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:05.585 [2024-06-10 08:11:27.273053] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:05.585 08:11:27 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:05.843 [2024-06-10 08:11:27.509379] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:05.843 08:11:27 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:06.101 NVMe0n1 00:16:06.101 08:11:27 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:06.360 00:16:06.360 08:11:28 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:06.619 00:16:06.619 08:11:28 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:06.619 08:11:28 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:06.878 08:11:28 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:07.137 08:11:28 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:10.425 08:11:31 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:10.425 08:11:31 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:10.425 08:11:32 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76075 00:16:10.425 08:11:32 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76075 00:16:10.425 08:11:32 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:11.801 0 00:16:11.801 08:11:33 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:11.801 [2024-06-10 08:11:26.090335] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:16:11.801 [2024-06-10 08:11:26.090471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75994 ] 00:16:11.801 [2024-06-10 08:11:26.225412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.801 [2024-06-10 08:11:26.337242] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.801 [2024-06-10 08:11:26.408273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:11.801 [2024-06-10 08:11:28.910027] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:11.801 [2024-06-10 08:11:28.910226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.801 [2024-06-10 08:11:28.910261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.801 [2024-06-10 08:11:28.910282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.801 [2024-06-10 08:11:28.910296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.801 [2024-06-10 08:11:28.910311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.801 [2024-06-10 08:11:28.910325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.801 [2024-06-10 08:11:28.910339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.801 [2024-06-10 08:11:28.910353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.801 [2024-06-10 08:11:28.910368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:11.801 [2024-06-10 08:11:28.910437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:11.801 [2024-06-10 08:11:28.910474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1683090 (9): Bad file descriptor 00:16:11.801 [2024-06-10 08:11:28.921321] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:11.801 Running I/O for 1 seconds... 
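This excerpt of try.txt shows the second bdevperf instance starting on a single core with the uring socket implementation, immediately failing over from 10.0.0.2:4420 to 10.0.0.2:4421 because the 4420 path was detached before I/O began, and then running its 1-second verify pass on the surviving path. Between path removals the script confirms the controller is still present; the one-liner below mirrors the bdev_nvme_get_controllers | grep -q NVMe0 pairing that appears repeatedly in this trace:

    # Confirm NVMe0 survived the failover before the next path is dropped
    # (mirrors the get_controllers | grep -q NVMe0 checks in the trace).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0 && echo "NVMe0 still attached"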
00:16:11.801 00:16:11.801 Latency(us) 00:16:11.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.801 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.801 Verification LBA range: start 0x0 length 0x4000 00:16:11.801 NVMe0n1 : 1.01 6270.65 24.49 0.00 0.00 20330.21 2353.34 16801.05 00:16:11.801 =================================================================================================================== 00:16:11.801 Total : 6270.65 24.49 0.00 0.00 20330.21 2353.34 16801.05 00:16:11.801 08:11:33 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:11.801 08:11:33 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:11.801 08:11:33 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:12.060 08:11:33 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:12.060 08:11:33 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:12.319 08:11:34 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:12.578 08:11:34 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 75994 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 75994 ']' 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 75994 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 75994 00:16:15.875 killing process with pid 75994 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 75994' 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 75994 00:16:15.875 08:11:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 75994 00:16:16.147 08:11:37 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:16.147 08:11:37 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.405 08:11:38 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:16.405 08:11:38 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:16.405 08:11:38 
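The 1-second pass completes at roughly 6.3k IOPS with no failed or timed-out I/Os, and the summary rows are internally consistent: the MiB/s column is just IOPS times the 4096-byte I/O size (6270.65 x 4096 / 1048576 ≈ 24.49, and likewise 8893.78 gives 34.74 for the earlier 15-second pass). After the summary, the script detaches the 10.0.0.2:4422 and 10.0.0.2:4421 paths in turn, checking bdev_nvme_get_controllers for NVMe0 between steps and sleeping to let the state settle. A quick check of the column arithmetic:

    # Consistency check on the summary rows: MiB/s = IOPS * 4 KiB I/O size.
    awk 'BEGIN { printf "%.2f %.2f\n", 6270.65*4096/1048576, 8893.78*4096/1048576 }'   # -> 24.49 34.74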
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:16.405 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:16.405 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:16.405 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:16.405 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:16.405 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:16.405 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:16.405 rmmod nvme_tcp 00:16:16.405 rmmod nvme_fabrics 00:16:16.405 rmmod nvme_keyring 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75740 ']' 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75740 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 75740 ']' 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 75740 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 75740 00:16:16.663 killing process with pid 75740 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 75740' 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 75740 00:16:16.663 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 75740 00:16:16.922 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:16.922 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:16.922 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:16.922 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.922 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.922 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.922 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.922 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.922 08:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:16.922 00:16:16.922 real 0m33.011s 00:16:16.922 user 2m7.900s 00:16:16.922 sys 0m5.481s 00:16:16.922 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:16.923 08:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:16.923 ************************************ 00:16:16.923 END TEST nvmf_failover 00:16:16.923 ************************************ 00:16:16.923 08:11:38 nvmf_tcp -- 
nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:16.923 08:11:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:16.923 08:11:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:16.923 08:11:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:16.923 ************************************ 00:16:16.923 START TEST nvmf_host_discovery 00:16:16.923 ************************************ 00:16:16.923 08:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:17.182 * Looking for test storage... 00:16:17.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:17.182 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:17.183 Cannot find device "nvmf_tgt_br" 00:16:17.183 
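The nvmf_veth_init trace that follows (after first tearing down any stale interfaces, which is why the "Cannot find device" messages above are expected) builds the virtual topology this discovery test runs on: an initiator veth pair on the host, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, and a bridge tying the host-side peers together. A condensed, standalone approximation is sketched below; interface, namespace, and address names are copied from the trace, while error handling and the suite's pre-cleanup steps are omitted, and the whole thing assumes root privileges.

  # Minimal sketch of the topology nvmf_veth_init constructs (names/addresses from the trace).
  set -e
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  # veth pairs: one for the initiator, two for the target namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  # addressing: initiator 10.0.0.1, target listeners 10.0.0.2 / 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up
  # bridge the host-side peers so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP traffic in on the initiator side and forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity pings, mirroring the trace
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec "$NS" ping -c 1 10.0.0.1

The second target-side interface (10.0.0.3) is not strictly needed by this discovery test; the common setup creates it so tests that want a second target address can reuse the same topology.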
08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.183 Cannot find device "nvmf_tgt_br2" 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:17.183 Cannot find device "nvmf_tgt_br" 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:17.183 Cannot find device "nvmf_tgt_br2" 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:17.183 08:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:17.183 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:17.183 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:17.183 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:17.183 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:17.183 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:17.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:17.442 00:16:17.442 --- 10.0.0.2 ping statistics --- 00:16:17.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.442 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:17.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:16:17.442 00:16:17.442 --- 10.0.0.3 ping statistics --- 00:16:17.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.442 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:16:17.442 00:16:17.442 --- 10.0.0.1 ping statistics --- 00:16:17.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.442 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76347 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76347 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 76347 ']' 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:17.442 08:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.442 [2024-06-10 08:11:39.282124] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:16:17.442 [2024-06-10 08:11:39.282222] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.701 [2024-06-10 08:11:39.421239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.701 [2024-06-10 08:11:39.564593] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.701 [2024-06-10 08:11:39.564677] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:17.701 [2024-06-10 08:11:39.564689] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.701 [2024-06-10 08:11:39.564698] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.701 [2024-06-10 08:11:39.564705] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.701 [2024-06-10 08:11:39.564750] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.960 [2024-06-10 08:11:39.642331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.528 [2024-06-10 08:11:40.321377] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.528 [2024-06-10 08:11:40.329519] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.528 null0 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.528 null1 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 
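At this point the target side of the test is fully prepared: nvmf_tgt was started inside the namespace, the TCP transport was created, a discovery listener was added on 10.0.0.2:8009, and two null bdevs were created and examined. The sketch below condenses that sequence into a standalone form; the paths are the repo defaults from this run, scripts/rpc.py stands in for the suite's rpc_cmd wrapper, and the polling loop is a simple stand-in for the waitforlisten helper, so treat it as an approximation rather than the exact harness code.

  # Condensed target-side bring-up matching the trace above (run as root).
  set -e
  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!   # keep the pid around so the target can be killed during cleanup
  # wait for the default RPC socket; the Unix socket is visible outside the netns
  until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
  # TCP transport and the discovery listener on 10.0.0.2:8009, as in the trace
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  # two null bdevs (1000 MiB, 512-byte blocks) that later back the cnode0 namespaces
  "$SPDK/scripts/rpc.py" bdev_null_create null0 1000 512
  "$SPDK/scripts/rpc.py" bdev_null_create null1 1000 512
  "$SPDK/scripts/rpc.py" bdev_wait_for_examine

The remainder of the trace is the host side: a second SPDK app is started on /tmp/host.sock and bdev_nvme_start_discovery is pointed at 10.0.0.2:8009, after which the test repeatedly compares the controllers and bdevs reported over /tmp/host.sock against what the target advertises.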
00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76379 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76379 /tmp/host.sock 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 76379 ']' 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:18.528 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:18.528 08:11:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.787 [2024-06-10 08:11:40.420143] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:16:18.787 [2024-06-10 08:11:40.420274] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76379 ] 00:16:18.787 [2024-06-10 08:11:40.561962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.046 [2024-06-10 08:11:40.707840] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.046 [2024-06-10 08:11:40.784732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:19.613 08:11:41 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:19.613 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@10 -- # set +x 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.872 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.132 [2024-06-10 08:11:41.802040] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq 
-r '.[].name' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.132 08:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:20.391 08:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.391 08:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:16:20.391 08:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:16:20.649 [2024-06-10 08:11:42.434832] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:20.649 [2024-06-10 08:11:42.434899] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:20.649 [2024-06-10 08:11:42.434921] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:20.649 [2024-06-10 08:11:42.440902] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:20.649 [2024-06-10 08:11:42.497477] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:20.649 [2024-06-10 08:11:42.497511] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:21.215 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.215 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:21.215 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:16:21.215 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:21.215 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.215 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.215 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:21.215 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:21.215 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:21.215 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:21.475 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.735 [2024-06-10 08:11:43.404192] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:21.735 [2024-06-10 08:11:43.404691] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:21.735 [2024-06-10 08:11:43.404737] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:21.735 [2024-06-10 08:11:43.410696] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:16:21.735 [2024-06-10 08:11:43.468070] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:21.735 [2024-06-10 08:11:43.468093] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:21.735 [2024-06-10 08:11:43.468099] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.735 08:11:43 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:21.735 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:21.736 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.736 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.736 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:21.736 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:16:21.736 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:21.736 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:21.736 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.736 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.736 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.995 [2024-06-10 08:11:43.637154] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:21.995 [2024-06-10 08:11:43.637232] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:21.995 [2024-06-10 08:11:43.643126] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:21.995 [2024-06-10 08:11:43.643194] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:21.995 [2024-06-10 08:11:43.643328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.995 [2024-06-10 08:11:43.643361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.995 [2024-06-10 08:11:43.643375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.995 [2024-06-10 08:11:43.643385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.995 [2024-06-10 08:11:43.643396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.995 [2024-06-10 08:11:43.643405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.995 [2024-06-10 08:11:43.643415] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.995 [2024-06-10 08:11:43.643424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.995 [2024-06-10 08:11:43.643433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252870 is same with the state(5) to be set 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # local max=10 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:21.995 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:21.996 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.255 
08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:22.255 08:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.255 08:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:22.256 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.256 08:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:22.256 08:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:22.256 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:16:22.256 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:16:22.256 08:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:22.256 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.256 08:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.634 [2024-06-10 08:11:45.085709] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:23.634 [2024-06-10 08:11:45.085755] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:23.634 [2024-06-10 08:11:45.085791] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:23.634 [2024-06-10 08:11:45.091744] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:23.634 [2024-06-10 08:11:45.151919] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:23.635 [2024-06-10 08:11:45.151989] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.635 request: 00:16:23.635 { 00:16:23.635 "name": "nvme", 00:16:23.635 "trtype": "tcp", 00:16:23.635 "traddr": "10.0.0.2", 00:16:23.635 "hostnqn": "nqn.2021-12.io.spdk:test", 
00:16:23.635 "adrfam": "ipv4", 00:16:23.635 "trsvcid": "8009", 00:16:23.635 "wait_for_attach": true, 00:16:23.635 "method": "bdev_nvme_start_discovery", 00:16:23.635 "req_id": 1 00:16:23.635 } 00:16:23.635 Got JSON-RPC error response 00:16:23.635 response: 00:16:23.635 { 00:16:23.635 "code": -17, 00:16:23.635 "message": "File exists" 00:16:23.635 } 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- 
# type -t rpc_cmd 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.635 request: 00:16:23.635 { 00:16:23.635 "name": "nvme_second", 00:16:23.635 "trtype": "tcp", 00:16:23.635 "traddr": "10.0.0.2", 00:16:23.635 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:23.635 "adrfam": "ipv4", 00:16:23.635 "trsvcid": "8009", 00:16:23.635 "wait_for_attach": true, 00:16:23.635 "method": "bdev_nvme_start_discovery", 00:16:23.635 "req_id": 1 00:16:23.635 } 00:16:23.635 Got JSON-RPC error response 00:16:23.635 response: 00:16:23.635 { 00:16:23.635 "code": -17, 00:16:23.635 "message": "File exists" 00:16:23.635 } 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.635 08:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:24.572 [2024-06-10 08:11:46.433271] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:24.572 [2024-06-10 08:11:46.433360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f25c0 with addr=10.0.0.2, port=8010 00:16:24.572 [2024-06-10 08:11:46.433405] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:24.572 [2024-06-10 08:11:46.433417] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:24.572 [2024-06-10 08:11:46.433428] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:25.947 [2024-06-10 08:11:47.433241] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:25.947 [2024-06-10 08:11:47.433318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f25c0 with addr=10.0.0.2, port=8010 00:16:25.947 [2024-06-10 08:11:47.433348] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:25.947 [2024-06-10 08:11:47.433360] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:25.947 [2024-06-10 08:11:47.433371] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:26.879 [2024-06-10 08:11:48.433064] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:26.879 request: 00:16:26.879 { 00:16:26.879 "name": "nvme_second", 00:16:26.879 "trtype": "tcp", 00:16:26.879 "traddr": "10.0.0.2", 00:16:26.879 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:26.879 "adrfam": "ipv4", 00:16:26.879 "trsvcid": "8010", 00:16:26.879 "attach_timeout_ms": 3000, 00:16:26.879 "method": "bdev_nvme_start_discovery", 00:16:26.879 "req_id": 1 00:16:26.879 } 00:16:26.879 Got JSON-RPC error response 00:16:26.879 response: 00:16:26.879 { 00:16:26.879 "code": -110, 00:16:26.879 "message": "Connection timed out" 00:16:26.879 } 00:16:26.879 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:26.879 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:16:26.879 
08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:26.879 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:26.879 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:26.879 08:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:26.879 08:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:26.879 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.879 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.879 08:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76379 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:26.880 rmmod nvme_tcp 00:16:26.880 rmmod nvme_fabrics 00:16:26.880 rmmod nvme_keyring 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76347 ']' 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76347 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 76347 ']' 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 76347 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 76347 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:26.880 killing process with pid 76347 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 76347' 
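The repeated `local cond=...` / `local max=10` / `(( max-- ))` / `eval` sequence in the discovery checks above is the autotest `waitforcondition` polling helper. The sketch below is an assumed simplification of that pattern for readability; in particular the fixed 1-second retry delay is a guess, not necessarily the exact pacing of the real helper in common/autotest_common.sh.

    # Minimal sketch of the waitforcondition polling pattern traced above.
    # Assumption: a fixed 1-second delay between retries; the real helper in
    # common/autotest_common.sh may pace and report retries differently.
    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10     # give up after 10 attempts
        while ((max--)); do
            if eval "$cond"; then
                return 0 # condition satisfied
            fi
            sleep 1
        done
        return 1         # condition never became true within the retry budget
    }

    # Typical use, mirroring host/discovery.sh:
    #   waitforcondition 'get_notification_count && ((notification_count == expected_count))'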
00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 76347 00:16:26.880 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 76347 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:27.137 00:16:27.137 real 0m10.156s 00:16:27.137 user 0m19.575s 00:16:27.137 sys 0m2.074s 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.137 ************************************ 00:16:27.137 END TEST nvmf_host_discovery 00:16:27.137 ************************************ 00:16:27.137 08:11:48 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:27.137 08:11:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:27.137 08:11:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:27.137 08:11:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:27.137 ************************************ 00:16:27.137 START TEST nvmf_host_multipath_status 00:16:27.137 ************************************ 00:16:27.137 08:11:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:27.396 * Looking for test storage... 
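Each test above runs through the `run_test` wrapper (`run_test nvmf_host_multipath_status .../multipath_status.sh --transport=tcp`), which produces the START TEST/END TEST banners and the real/user/sys timing lines seen in the trace. The sketch below is a rough reconstruction from those traces only; the real wrapper in autotest_common.sh also performs argument validation (the `'[' 3 -le 1 ']'` check) and xtrace bookkeeping that are omitted here.

    # Rough sketch of the run_test wrapper, reconstructed from the banners and
    # timing lines in the trace.  Assumption: argument validation and xtrace
    # handling of the real helper are omitted.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # run the test script with its remaining arguments
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # As invoked above:
    #   run_test nvmf_host_multipath_status \
    #       /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp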
00:16:27.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:27.396 Cannot find device "nvmf_tgt_br" 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:16:27.396 Cannot find device "nvmf_tgt_br2" 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:27.396 Cannot find device "nvmf_tgt_br" 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:27.396 Cannot find device "nvmf_tgt_br2" 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.396 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.654 08:11:49 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:27.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:16:27.654 00:16:27.654 --- 10.0.0.2 ping statistics --- 00:16:27.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.654 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:27.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:27.654 00:16:27.654 --- 10.0.0.3 ping statistics --- 00:16:27.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.654 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:27.654 00:16:27.654 --- 10.0.0.1 ping statistics --- 00:16:27.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.654 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76832 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76832 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 76832 ']' 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:27.654 08:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:27.655 [2024-06-10 08:11:49.478841] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:16:27.655 [2024-06-10 08:11:49.478938] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.912 [2024-06-10 08:11:49.619037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:27.912 [2024-06-10 08:11:49.763327] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:27.912 [2024-06-10 08:11:49.763651] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.912 [2024-06-10 08:11:49.763746] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.912 [2024-06-10 08:11:49.763858] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.912 [2024-06-10 08:11:49.763938] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.912 [2024-06-10 08:11:49.764127] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.912 [2024-06-10 08:11:49.764157] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.170 [2024-06-10 08:11:49.829652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:28.736 08:11:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:28.736 08:11:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:16:28.736 08:11:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.736 08:11:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:28.736 08:11:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:28.736 08:11:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.736 08:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76832 00:16:28.736 08:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:29.034 [2024-06-10 08:11:50.746685] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.034 08:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:29.292 Malloc0 00:16:29.292 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:29.551 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:29.809 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.067 [2024-06-10 08:11:51.724995] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.067 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:30.326 [2024-06-10 08:11:51.953260] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:30.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
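The target-side configuration that multipath_status.sh drives above reduces to a short RPC sequence: create the TCP transport, create a 64 MiB malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting, add the namespace, and listen on 10.0.0.2 ports 4420 and 4421. The block below simply collects those calls (taken verbatim from the trace) into one standalone sequence; it assumes an nvmf_tgt is already running and reachable through rpc.py's default socket.

    # The target-side RPC calls traced above, collected into one sequence.
    # Assumption: an nvmf_tgt is already up and rpc.py uses its default socket.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc_py nvmf_create_transport -t tcp -o -u 8192                  # transport options verbatim from the trace
    $rpc_py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512-byte blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2                             # allow any host, ANA reporting enabled
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421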
00:16:30.326 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:30.326 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76882 00:16:30.326 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.326 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76882 /var/tmp/bdevperf.sock 00:16:30.326 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 76882 ']' 00:16:30.326 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.326 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:30.326 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.326 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:30.326 08:11:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:31.261 08:11:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:31.261 08:11:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:16:31.261 08:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:31.519 08:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:31.778 Nvme0n1 00:16:31.778 08:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:32.035 Nvme0n1 00:16:32.035 08:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:32.035 08:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:33.937 08:11:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:33.937 08:11:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:34.519 08:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:34.519 08:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:35.905 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 
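The `check_status true false true true true true` call above expands into the six `port_status` checks that follow: for each of ports 4420 and 4421 it compares the `current`, `connected`, and `accessible` fields reported by `bdev_nvme_get_io_paths` on the bdevperf RPC socket against the expected values. The sketch below condenses that helper pair based only on the jq filters visible in the trace; the real helpers in host/multipath_status.sh may differ in detail.

    # Condensed sketch of port_status/check_status, based on the jq filters in
    # the trace.  Assumption: exact quoting/structure of the real helpers in
    # host/multipath_status.sh may differ.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    port_status() {
        local port=$1 attr=$2 expected=$3
        [[ "$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")" \
            == "$expected" ]]
    }

    check_status() {
        port_status 4420 current "$1"    && port_status 4421 current "$2" &&
        port_status 4420 connected "$3"  && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

    # e.g. after set_ANA_state optimized optimized:
    #   check_status true false true true true true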
00:16:35.905 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:35.905 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.905 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:35.905 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.905 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:35.905 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.905 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:36.247 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:36.247 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:36.247 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.247 08:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:36.247 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.247 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:36.247 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:36.247 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.507 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.507 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:36.507 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.507 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:36.766 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.766 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:36.766 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:36.766 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.025 08:11:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.025 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:37.025 08:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:37.284 08:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:37.544 08:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:38.922 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:38.922 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:38.922 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.922 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:38.922 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:38.922 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:38.922 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.922 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:39.180 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.180 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:39.180 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:39.180 08:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.438 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.438 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:39.438 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.438 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:39.696 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.696 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:39.696 08:12:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.696 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:39.955 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.955 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:39.955 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.955 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:39.955 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.955 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:39.955 08:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:40.213 08:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:40.779 08:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:41.713 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:41.713 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:41.713 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.713 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:41.972 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.972 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:41.972 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.972 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:42.231 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:42.231 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:42.231 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:42.231 08:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.231 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.231 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:42.231 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:42.231 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.489 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.489 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:42.489 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.489 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:42.747 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.747 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:42.747 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.747 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:43.006 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.006 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:43.006 08:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:43.265 08:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:43.524 08:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:44.899 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:44.900 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:44.900 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.900 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:44.900 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.900 08:12:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:44.900 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:44.900 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.158 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.158 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:45.158 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.158 08:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:45.417 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.417 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:45.417 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.417 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:45.676 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.676 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:45.676 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:45.676 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.935 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.935 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:45.935 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:45.935 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.193 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.193 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:46.193 08:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:46.452 08:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:46.711 08:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:47.648 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:47.648 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:47.648 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.648 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:47.906 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:47.906 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:47.906 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:47.906 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.165 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:48.165 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:48.165 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.165 08:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:48.424 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.424 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:48.424 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:48.424 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.683 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.683 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:48.683 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.683 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:48.942 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:48.942 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:48.942 08:12:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.942 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:49.203 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:49.203 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:49.203 08:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:49.499 08:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:49.758 08:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:50.697 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:50.697 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:50.697 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.697 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:50.956 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:50.956 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:50.956 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:50.956 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.215 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.215 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:51.215 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:51.215 08:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.473 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.473 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:51.473 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.474 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:51.732 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.732 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:51.732 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.732 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:51.991 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:51.991 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:51.991 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.991 08:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:52.249 08:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.249 08:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:52.508 08:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:52.508 08:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:52.766 08:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:53.023 08:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:54.399 08:12:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:54.399 08:12:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:54.399 08:12:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.399 08:12:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:54.399 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.399 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:54.399 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.399 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:16:54.658 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.658 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:54.658 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.658 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:54.917 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.917 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:54.917 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.917 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:55.175 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.175 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:55.175 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:55.175 08:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.433 08:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.433 08:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:55.433 08:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:55.433 08:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.692 08:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.692 08:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:55.692 08:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:55.950 08:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:56.209 08:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:57.144 08:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:57.144 08:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 
4420 current false 00:16:57.144 08:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.144 08:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:57.401 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:57.401 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:57.401 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.401 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:57.659 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.659 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:57.659 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.659 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:57.917 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.917 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:57.917 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:57.917 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.176 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.176 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:58.176 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.176 08:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:58.434 08:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.434 08:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:58.434 08:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.434 08:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:58.693 08:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.693 08:12:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:58.693 08:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:58.951 08:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:59.209 08:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:00.144 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:00.144 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:00.403 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.403 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:00.661 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.661 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:00.661 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.661 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:00.927 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.927 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:00.927 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.927 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:01.199 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.199 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:01.199 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.199 08:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:01.199 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.199 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:01.199 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.199 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:01.457 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.457 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:01.457 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.457 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:01.716 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.717 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:01.717 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:01.977 08:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:02.234 08:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:03.609 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:03.609 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:03.609 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.609 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:03.609 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.609 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:03.609 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.609 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:03.868 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:03.868 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:03.868 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:03.868 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.127 08:12:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.127 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:04.127 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.127 08:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:04.385 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.385 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:04.385 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.385 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:04.684 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.684 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:04.684 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.684 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76882 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 76882 ']' 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 76882 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 76882 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:17:04.943 killing process with pid 76882 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 76882' 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 76882 00:17:04.943 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 76882 00:17:04.943 Connection closed with partial response: 00:17:04.943 00:17:04.943 00:17:05.205 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76882 00:17:05.205 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:05.205 [2024-06-10 08:11:52.014835] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:17:05.206 [2024-06-10 08:11:52.015056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76882 ] 00:17:05.206 [2024-06-10 08:11:52.149023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.206 [2024-06-10 08:11:52.285262] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.206 [2024-06-10 08:11:52.344716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:05.206 Running I/O for 90 seconds... 00:17:05.206 [2024-06-10 08:12:08.172591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.206 [2024-06-10 08:12:08.172707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.172776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.206 [2024-06-10 08:12:08.172809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.172835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.206 [2024-06-10 08:12:08.172851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.172872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.206 [2024-06-10 08:12:08.172886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.172907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.206 [2024-06-10 08:12:08.172922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.172944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.206 [2024-06-10 08:12:08.172958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.172980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.206 [2024-06-10 08:12:08.172995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.206 
[2024-06-10 08:12:08.173031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.206 [2024-06-10 08:12:08.173067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.206 [2024-06-10 08:12:08.173104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.206 [2024-06-10 08:12:08.173163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.206 [2024-06-10 08:12:08.173201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.206 [2024-06-10 08:12:08.173238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.206 [2024-06-10 08:12:08.173273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.206 [2024-06-10 08:12:08.173307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.206 [2024-06-10 08:12:08.173344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.206 [2024-06-10 08:12:08.173382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:05.206 [2024-06-10 08:12:08.173404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34072 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:05.206 [2024-06-10 08:12:08.173418 - 08:12:24.068207] nvme_qpair.c: [long run of similar *NOTICE* record pairs: 243:nvme_io_qpair_print_command for READ/WRITE sqid:1 nsid:1 lba:34080-58456 len:8, each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
00:17:05.212 Received shutdown signal, test time was about 32.724551 seconds
00:17:05.212
00:17:05.212 Latency(us)
00:17:05.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:05.212 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:05.212 Verification LBA range: start 0x0 length 0x4000
00:17:05.212 Nvme0n1 : 32.72 7958.86 31.09 0.00 0.00 16050.93 207.59 4026531.84
00:17:05.212 ===================================================================================================================
00:17:05.212 Total : 7958.86 31.09 0.00 0.00 16050.93 207.59 4026531.84
00:17:05.212 08:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:05.471 rmmod nvme_tcp
00:17:05.471 rmmod nvme_fabrics
00:17:05.471 rmmod nvme_keyring
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76832 ']'
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76832
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 76832 ']'
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 76832
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 76832
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 76832'
killing process with pid 76832
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 76832
00:17:05.471 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 76832
00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:05.730 00:17:05.730 real 0m38.612s 00:17:05.730 user 2m3.782s 00:17:05.730 sys 0m11.880s 00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:05.730 08:12:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:05.730 ************************************ 00:17:05.730 END TEST nvmf_host_multipath_status 00:17:05.730 ************************************ 00:17:05.989 08:12:27 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:05.989 08:12:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:05.989 08:12:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:05.989 08:12:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:05.989 ************************************ 00:17:05.989 START TEST nvmf_discovery_remove_ifc 00:17:05.989 ************************************ 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:05.989 * Looking for test storage... 
00:17:05.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:05.989 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:05.990 Cannot find device "nvmf_tgt_br" 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:05.990 Cannot find device "nvmf_tgt_br2" 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:05.990 Cannot find device "nvmf_tgt_br" 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:05.990 Cannot find device "nvmf_tgt_br2" 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:05.990 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:06.249 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.249 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:06.249 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.249 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:06.250 08:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:06.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:17:06.250 00:17:06.250 --- 10.0.0.2 ping statistics --- 00:17:06.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.250 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:06.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:06.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:06.250 00:17:06.250 --- 10.0.0.3 ping statistics --- 00:17:06.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.250 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:06.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:06.250 00:17:06.250 --- 10.0.0.1 ping statistics --- 00:17:06.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.250 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77665 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77665 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 77665 ']' 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:06.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:06.250 08:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.509 [2024-06-10 08:12:28.119556] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:17:06.509 [2024-06-10 08:12:28.119677] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.509 [2024-06-10 08:12:28.254750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.509 [2024-06-10 08:12:28.360532] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:06.509 [2024-06-10 08:12:28.360606] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.509 [2024-06-10 08:12:28.360633] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.509 [2024-06-10 08:12:28.360641] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.509 [2024-06-10 08:12:28.360648] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.509 [2024-06-10 08:12:28.360670] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.768 [2024-06-10 08:12:28.414644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.336 [2024-06-10 08:12:29.135929] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.336 [2024-06-10 08:12:29.144094] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:07.336 null0 00:17:07.336 [2024-06-10 08:12:29.175963] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77697 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77697 /tmp/host.sock 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 77697 ']' 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:07.336 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:07.336 08:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.595 [2024-06-10 08:12:29.246572] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:17:07.595 [2024-06-10 08:12:29.246684] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77697 ] 00:17:07.595 [2024-06-10 08:12:29.385029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.855 [2024-06-10 08:12:29.515105] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.423 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:08.423 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:17:08.423 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:08.423 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:08.423 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.423 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.423 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.423 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:08.423 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.423 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.682 [2024-06-10 08:12:30.316428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:08.682 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.682 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:08.682 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.682 08:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.619 [2024-06-10 08:12:31.369152] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:09.619 [2024-06-10 08:12:31.369207] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:09.619 [2024-06-10 08:12:31.369228] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:09.619 [2024-06-10 08:12:31.375197] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:09.619 [2024-06-10 08:12:31.432367] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 
00:17:09.619 [2024-06-10 08:12:31.432457] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:09.619 [2024-06-10 08:12:31.432486] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:09.619 [2024-06-10 08:12:31.432503] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:09.619 [2024-06-10 08:12:31.432528] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:09.619 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.619 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:09.619 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:09.619 [2024-06-10 08:12:31.437753] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xca2400 was disconnected and freed. delete nvme_qpair. 00:17:09.619 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.619 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:09.619 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:09.619 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:09.619 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.619 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.619 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:09.878 08:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:10.815 08:12:32 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:10.815 08:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:10.815 08:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:10.815 08:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.815 08:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:10.815 08:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:10.815 08:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:10.815 08:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.815 08:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:10.815 08:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:12.194 08:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:12.194 08:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.194 08:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.194 08:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:12.194 08:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:12.194 08:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:12.194 08:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:12.194 08:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.194 08:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:12.194 08:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:13.131 08:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:13.131 08:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.131 08:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:13.131 08:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:13.131 08:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.131 08:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:13.131 08:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:13.131 08:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.131 08:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:13.131 08:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:14.083 08:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:14.083 08:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:17:14.083 08:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:14.083 08:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:14.083 08:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:14.083 08:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.083 08:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:14.083 08:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:14.083 08:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:14.083 08:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:15.054 08:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:15.054 08:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.054 08:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:15.054 08:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:15.054 08:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.054 08:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.054 08:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:15.054 08:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.054 [2024-06-10 08:12:36.859863] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:15.054 [2024-06-10 08:12:36.859943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.054 [2024-06-10 08:12:36.859959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.054 [2024-06-10 08:12:36.859973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.054 [2024-06-10 08:12:36.859982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.054 [2024-06-10 08:12:36.859992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.054 [2024-06-10 08:12:36.860001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.054 [2024-06-10 08:12:36.860011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.054 [2024-06-10 08:12:36.860019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.054 [2024-06-10 08:12:36.860029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.054 [2024-06-10 08:12:36.860037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.054 [2024-06-10 08:12:36.860047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d8b0 is same with the state(5) to be set 00:17:15.054 [2024-06-10 08:12:36.869860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7d8b0 (9): Bad file descriptor 00:17:15.054 08:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:15.054 08:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:15.054 [2024-06-10 08:12:36.879879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:16.431 08:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:16.431 08:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.431 08:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:16.431 08:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:16.431 08:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:16.431 08:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.431 08:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:16.431 [2024-06-10 08:12:37.926918] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:16.431 [2024-06-10 08:12:37.927032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc7d8b0 with addr=10.0.0.2, port=4420 00:17:16.431 [2024-06-10 08:12:37.927067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d8b0 is same with the state(5) to be set 00:17:16.431 [2024-06-10 08:12:37.927130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7d8b0 (9): Bad file descriptor 00:17:16.431 [2024-06-10 08:12:37.927947] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:16.431 [2024-06-10 08:12:37.927999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:16.431 [2024-06-10 08:12:37.928020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:16.431 [2024-06-10 08:12:37.928042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:16.431 [2024-06-10 08:12:37.928084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:16.431 [2024-06-10 08:12:37.928107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:16.431 08:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:16.431 08:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:16.431 08:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:17.423 [2024-06-10 08:12:38.928189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
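The errno 110 (Connection timed out) errors above are the host side reacting to the target data interface having been taken away a few polls earlier; once the retry window configured at attach time is exhausted, the discovery entry for nqn.2016-06.io.spdk:cnode0 is dropped and nvme0n1 vanishes from the bdev list. For reference, the attach that set those limits appears earlier in this log and corresponds roughly to the following rpc.py invocation (a sketch: rpc_cmd in the test is assumed to resolve to scripts/rpc.py in the SPDK repo, with the same socket and arguments shown in the trace):

    # Attach a discovery controller on the host-side target listening on /tmp/host.sock.
    # The short reconnect/ctrlr-loss timeouts bound how long the host keeps retrying
    # once the transport starts reporting failures like the errno 110 above.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach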
00:17:17.423 [2024-06-10 08:12:38.928352] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:17.423 [2024-06-10 08:12:38.928450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.423 [2024-06-10 08:12:38.928466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.423 [2024-06-10 08:12:38.928479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.423 [2024-06-10 08:12:38.928488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.423 [2024-06-10 08:12:38.928497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.423 [2024-06-10 08:12:38.928506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.423 [2024-06-10 08:12:38.928516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.423 [2024-06-10 08:12:38.928524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.423 [2024-06-10 08:12:38.928534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.423 [2024-06-10 08:12:38.928542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.423 [2024-06-10 08:12:38.928551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:17.423 [2024-06-10 08:12:38.928586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0c980 (9): Bad file descriptor 00:17:17.423 [2024-06-10 08:12:38.929596] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:17.423 [2024-06-10 08:12:38.929627] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:17.423 08:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:17.423 08:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.423 08:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:17.423 08:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.423 08:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:17.423 08:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.423 08:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:17.423 08:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:17.423 08:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:18.359 08:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:18.359 08:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.359 08:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.359 08:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:18.359 08:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:18.359 08:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:18.359 08:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:18.359 08:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.359 08:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:18.359 08:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:19.296 [2024-06-10 08:12:40.933619] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:19.296 [2024-06-10 08:12:40.933842] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:19.296 [2024-06-10 08:12:40.933877] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:19.296 [2024-06-10 08:12:40.939655] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:19.296 [2024-06-10 08:12:40.994924] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:19.296 [2024-06-10 08:12:40.995121] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:19.296 [2024-06-10 08:12:40.995185] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:19.296 [2024-06-10 08:12:40.995292] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:19.296 [2024-06-10 08:12:40.995350] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:19.296 [2024-06-10 08:12:41.002346] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc86c30 was disconnected and freed. delete nvme_qpair. 
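With 10.0.0.2 restored on nvmf_tgt_if and the link back up, discovery reconnects and attaches the subsystem as nvme1, so the wait loop below is now checking for nvme1n1. The interface bounce and the bdev polling exercised by this test can be reproduced roughly as follows (a sketch built only from the namespace, interface, and RPC names visible in this log; the test's get_bdev_list/wait_for_bdev helpers do essentially the same thing):

    # Remove the target-side data path from inside its network namespace...
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # ...then, after the bdev has been dropped, put it back.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # Poll the flattened host-side bdev list once per second until the
    # rediscovered namespace shows up again.
    while [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme1n1 ]]; do
        sleep 1
    done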
00:17:19.296 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:19.296 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:19.296 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:19.296 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.296 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:19.296 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.296 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77697 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 77697 ']' 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 77697 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 77697 00:17:19.556 killing process with pid 77697 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 77697' 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 77697 00:17:19.556 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 77697 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.816 rmmod nvme_tcp 00:17:19.816 rmmod nvme_fabrics 00:17:19.816 rmmod nvme_keyring 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:19.816 08:12:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77665 ']' 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77665 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 77665 ']' 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 77665 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 77665 00:17:19.816 killing process with pid 77665 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 77665' 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 77665 00:17:19.816 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 77665 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:20.075 ************************************ 00:17:20.075 END TEST nvmf_discovery_remove_ifc 00:17:20.075 ************************************ 00:17:20.075 00:17:20.075 real 0m14.204s 00:17:20.075 user 0m24.602s 00:17:20.075 sys 0m2.538s 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:20.075 08:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:20.075 08:12:41 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:20.075 08:12:41 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:20.075 08:12:41 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:20.075 08:12:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:20.075 ************************************ 00:17:20.075 START TEST nvmf_identify_kernel_target 00:17:20.075 ************************************ 00:17:20.075 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:20.334 * Looking for test storage... 00:17:20.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.335 08:12:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:20.335 Cannot find device "nvmf_tgt_br" 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.335 Cannot find device "nvmf_tgt_br2" 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:20.335 Cannot find device "nvmf_tgt_br" 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:20.335 Cannot find device "nvmf_tgt_br2" 00:17:20.335 08:12:42 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:20.335 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:20.336 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:20.336 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:20.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:20.594 00:17:20.594 --- 10.0.0.2 ping statistics --- 00:17:20.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.594 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:20.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:17:20.594 00:17:20.594 --- 10.0.0.3 ping statistics --- 00:17:20.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.594 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:20.594 00:17:20.594 --- 10.0.0.1 ping statistics --- 00:17:20.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.594 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.594 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:20.595 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:20.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:20.852 Waiting for block devices as requested 00:17:21.110 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:21.110 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:21.110 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:21.110 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:21.110 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:21.110 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:17:21.110 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:21.110 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:17:21.110 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:21.110 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:21.110 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:21.110 No valid GPT data, bailing 00:17:21.111 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n2 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:21.370 08:12:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:21.370 No valid GPT data, bailing 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n3 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:21.370 No valid GPT data, bailing 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:21.370 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:21.370 No valid GPT data, bailing 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
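The entries from here through the nvme discover call build a kernel NVMe-oF TCP target over configfs for nqn.2016-06.io.spdk:testnqn, backed by the /dev/nvme1n1 namespace picked out by the GPT scan above. A condensed sketch of that sequence follows; the xtrace output strips the redirection targets of the echo commands, so the attribute file names (and the $nvmet/$subsys shorthand) are the standard nvmet configfs names assumed here for illustration, not a literal transcript:

  # Assumed standard nvmet configfs layout; the values are taken from the trace.
  modprobe nvmet                 # nvmet_tcp is pulled in once the TCP port is wired up
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"          # matches the Model Number reported below
  echo 1                                > "$subsys/attr_allow_any_host" # assumption: target file not shown in the trace
  echo /dev/nvme1n1                     > "$subsys/namespaces/1/device_path"
  echo 1                                > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"

  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover call that follows (against 10.0.0.1:4420 with the generated host NQN) then reports the expected two discovery log records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.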
00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:21.371 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:21.629 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid=0b063e5e-64f6-4b4f-b15f-bd51b74609ab -a 10.0.0.1 -t tcp -s 4420 00:17:21.629 00:17:21.629 Discovery Log Number of Records 2, Generation counter 2 00:17:21.629 =====Discovery Log Entry 0====== 00:17:21.629 trtype: tcp 00:17:21.629 adrfam: ipv4 00:17:21.629 subtype: current discovery subsystem 00:17:21.629 treq: not specified, sq flow control disable supported 00:17:21.629 portid: 1 00:17:21.629 trsvcid: 4420 00:17:21.629 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:21.629 traddr: 10.0.0.1 00:17:21.629 eflags: none 00:17:21.629 sectype: none 00:17:21.629 =====Discovery Log Entry 1====== 00:17:21.629 trtype: tcp 00:17:21.629 adrfam: ipv4 00:17:21.629 subtype: nvme subsystem 00:17:21.629 treq: not specified, sq flow control disable supported 00:17:21.629 portid: 1 00:17:21.629 trsvcid: 4420 00:17:21.629 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:21.629 traddr: 10.0.0.1 00:17:21.629 eflags: none 00:17:21.629 sectype: none 00:17:21.629 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:21.629 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:21.629 ===================================================== 00:17:21.629 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:21.629 ===================================================== 00:17:21.629 Controller Capabilities/Features 00:17:21.629 ================================ 00:17:21.629 Vendor ID: 0000 00:17:21.629 Subsystem Vendor ID: 0000 00:17:21.629 Serial Number: 5ea2c0775b1a84cf19fa 00:17:21.629 Model Number: Linux 00:17:21.629 Firmware Version: 6.7.0-68 00:17:21.629 Recommended Arb Burst: 0 00:17:21.629 IEEE OUI Identifier: 00 00 00 00:17:21.629 Multi-path I/O 00:17:21.629 May have multiple subsystem ports: No 00:17:21.629 May have multiple controllers: No 00:17:21.629 Associated with SR-IOV VF: No 00:17:21.629 Max Data Transfer Size: Unlimited 00:17:21.629 Max Number of Namespaces: 0 
00:17:21.629 Max Number of I/O Queues: 1024 00:17:21.629 NVMe Specification Version (VS): 1.3 00:17:21.629 NVMe Specification Version (Identify): 1.3 00:17:21.629 Maximum Queue Entries: 1024 00:17:21.629 Contiguous Queues Required: No 00:17:21.629 Arbitration Mechanisms Supported 00:17:21.629 Weighted Round Robin: Not Supported 00:17:21.629 Vendor Specific: Not Supported 00:17:21.629 Reset Timeout: 7500 ms 00:17:21.629 Doorbell Stride: 4 bytes 00:17:21.629 NVM Subsystem Reset: Not Supported 00:17:21.629 Command Sets Supported 00:17:21.629 NVM Command Set: Supported 00:17:21.629 Boot Partition: Not Supported 00:17:21.629 Memory Page Size Minimum: 4096 bytes 00:17:21.629 Memory Page Size Maximum: 4096 bytes 00:17:21.629 Persistent Memory Region: Not Supported 00:17:21.629 Optional Asynchronous Events Supported 00:17:21.629 Namespace Attribute Notices: Not Supported 00:17:21.629 Firmware Activation Notices: Not Supported 00:17:21.629 ANA Change Notices: Not Supported 00:17:21.629 PLE Aggregate Log Change Notices: Not Supported 00:17:21.629 LBA Status Info Alert Notices: Not Supported 00:17:21.629 EGE Aggregate Log Change Notices: Not Supported 00:17:21.629 Normal NVM Subsystem Shutdown event: Not Supported 00:17:21.629 Zone Descriptor Change Notices: Not Supported 00:17:21.629 Discovery Log Change Notices: Supported 00:17:21.629 Controller Attributes 00:17:21.629 128-bit Host Identifier: Not Supported 00:17:21.629 Non-Operational Permissive Mode: Not Supported 00:17:21.629 NVM Sets: Not Supported 00:17:21.629 Read Recovery Levels: Not Supported 00:17:21.629 Endurance Groups: Not Supported 00:17:21.629 Predictable Latency Mode: Not Supported 00:17:21.629 Traffic Based Keep ALive: Not Supported 00:17:21.629 Namespace Granularity: Not Supported 00:17:21.629 SQ Associations: Not Supported 00:17:21.629 UUID List: Not Supported 00:17:21.629 Multi-Domain Subsystem: Not Supported 00:17:21.629 Fixed Capacity Management: Not Supported 00:17:21.629 Variable Capacity Management: Not Supported 00:17:21.629 Delete Endurance Group: Not Supported 00:17:21.629 Delete NVM Set: Not Supported 00:17:21.629 Extended LBA Formats Supported: Not Supported 00:17:21.629 Flexible Data Placement Supported: Not Supported 00:17:21.629 00:17:21.629 Controller Memory Buffer Support 00:17:21.629 ================================ 00:17:21.629 Supported: No 00:17:21.629 00:17:21.629 Persistent Memory Region Support 00:17:21.629 ================================ 00:17:21.629 Supported: No 00:17:21.629 00:17:21.629 Admin Command Set Attributes 00:17:21.629 ============================ 00:17:21.629 Security Send/Receive: Not Supported 00:17:21.629 Format NVM: Not Supported 00:17:21.629 Firmware Activate/Download: Not Supported 00:17:21.629 Namespace Management: Not Supported 00:17:21.629 Device Self-Test: Not Supported 00:17:21.629 Directives: Not Supported 00:17:21.629 NVMe-MI: Not Supported 00:17:21.629 Virtualization Management: Not Supported 00:17:21.629 Doorbell Buffer Config: Not Supported 00:17:21.629 Get LBA Status Capability: Not Supported 00:17:21.629 Command & Feature Lockdown Capability: Not Supported 00:17:21.629 Abort Command Limit: 1 00:17:21.629 Async Event Request Limit: 1 00:17:21.629 Number of Firmware Slots: N/A 00:17:21.629 Firmware Slot 1 Read-Only: N/A 00:17:21.629 Firmware Activation Without Reset: N/A 00:17:21.629 Multiple Update Detection Support: N/A 00:17:21.629 Firmware Update Granularity: No Information Provided 00:17:21.629 Per-Namespace SMART Log: No 00:17:21.630 Asymmetric Namespace Access Log Page: 
Not Supported 00:17:21.630 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:21.630 Command Effects Log Page: Not Supported 00:17:21.630 Get Log Page Extended Data: Supported 00:17:21.630 Telemetry Log Pages: Not Supported 00:17:21.630 Persistent Event Log Pages: Not Supported 00:17:21.630 Supported Log Pages Log Page: May Support 00:17:21.630 Commands Supported & Effects Log Page: Not Supported 00:17:21.630 Feature Identifiers & Effects Log Page:May Support 00:17:21.630 NVMe-MI Commands & Effects Log Page: May Support 00:17:21.630 Data Area 4 for Telemetry Log: Not Supported 00:17:21.630 Error Log Page Entries Supported: 1 00:17:21.630 Keep Alive: Not Supported 00:17:21.630 00:17:21.630 NVM Command Set Attributes 00:17:21.630 ========================== 00:17:21.630 Submission Queue Entry Size 00:17:21.630 Max: 1 00:17:21.630 Min: 1 00:17:21.630 Completion Queue Entry Size 00:17:21.630 Max: 1 00:17:21.630 Min: 1 00:17:21.630 Number of Namespaces: 0 00:17:21.630 Compare Command: Not Supported 00:17:21.630 Write Uncorrectable Command: Not Supported 00:17:21.630 Dataset Management Command: Not Supported 00:17:21.630 Write Zeroes Command: Not Supported 00:17:21.630 Set Features Save Field: Not Supported 00:17:21.630 Reservations: Not Supported 00:17:21.630 Timestamp: Not Supported 00:17:21.630 Copy: Not Supported 00:17:21.630 Volatile Write Cache: Not Present 00:17:21.630 Atomic Write Unit (Normal): 1 00:17:21.630 Atomic Write Unit (PFail): 1 00:17:21.630 Atomic Compare & Write Unit: 1 00:17:21.630 Fused Compare & Write: Not Supported 00:17:21.630 Scatter-Gather List 00:17:21.630 SGL Command Set: Supported 00:17:21.630 SGL Keyed: Not Supported 00:17:21.630 SGL Bit Bucket Descriptor: Not Supported 00:17:21.630 SGL Metadata Pointer: Not Supported 00:17:21.630 Oversized SGL: Not Supported 00:17:21.630 SGL Metadata Address: Not Supported 00:17:21.630 SGL Offset: Supported 00:17:21.630 Transport SGL Data Block: Not Supported 00:17:21.630 Replay Protected Memory Block: Not Supported 00:17:21.630 00:17:21.630 Firmware Slot Information 00:17:21.630 ========================= 00:17:21.630 Active slot: 0 00:17:21.630 00:17:21.630 00:17:21.630 Error Log 00:17:21.630 ========= 00:17:21.630 00:17:21.630 Active Namespaces 00:17:21.630 ================= 00:17:21.630 Discovery Log Page 00:17:21.630 ================== 00:17:21.630 Generation Counter: 2 00:17:21.630 Number of Records: 2 00:17:21.630 Record Format: 0 00:17:21.630 00:17:21.630 Discovery Log Entry 0 00:17:21.630 ---------------------- 00:17:21.630 Transport Type: 3 (TCP) 00:17:21.630 Address Family: 1 (IPv4) 00:17:21.630 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:21.630 Entry Flags: 00:17:21.630 Duplicate Returned Information: 0 00:17:21.630 Explicit Persistent Connection Support for Discovery: 0 00:17:21.630 Transport Requirements: 00:17:21.630 Secure Channel: Not Specified 00:17:21.630 Port ID: 1 (0x0001) 00:17:21.630 Controller ID: 65535 (0xffff) 00:17:21.630 Admin Max SQ Size: 32 00:17:21.630 Transport Service Identifier: 4420 00:17:21.630 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:21.630 Transport Address: 10.0.0.1 00:17:21.630 Discovery Log Entry 1 00:17:21.630 ---------------------- 00:17:21.630 Transport Type: 3 (TCP) 00:17:21.630 Address Family: 1 (IPv4) 00:17:21.630 Subsystem Type: 2 (NVM Subsystem) 00:17:21.630 Entry Flags: 00:17:21.630 Duplicate Returned Information: 0 00:17:21.630 Explicit Persistent Connection Support for Discovery: 0 00:17:21.630 Transport Requirements: 00:17:21.630 
Secure Channel: Not Specified 00:17:21.630 Port ID: 1 (0x0001) 00:17:21.630 Controller ID: 65535 (0xffff) 00:17:21.630 Admin Max SQ Size: 32 00:17:21.630 Transport Service Identifier: 4420 00:17:21.630 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:21.630 Transport Address: 10.0.0.1 00:17:21.630 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:21.889 get_feature(0x01) failed 00:17:21.889 get_feature(0x02) failed 00:17:21.889 get_feature(0x04) failed 00:17:21.889 ===================================================== 00:17:21.889 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:21.889 ===================================================== 00:17:21.889 Controller Capabilities/Features 00:17:21.889 ================================ 00:17:21.889 Vendor ID: 0000 00:17:21.889 Subsystem Vendor ID: 0000 00:17:21.889 Serial Number: 201a3981fe0bd70de196 00:17:21.889 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:21.889 Firmware Version: 6.7.0-68 00:17:21.889 Recommended Arb Burst: 6 00:17:21.889 IEEE OUI Identifier: 00 00 00 00:17:21.889 Multi-path I/O 00:17:21.889 May have multiple subsystem ports: Yes 00:17:21.889 May have multiple controllers: Yes 00:17:21.889 Associated with SR-IOV VF: No 00:17:21.889 Max Data Transfer Size: Unlimited 00:17:21.889 Max Number of Namespaces: 1024 00:17:21.889 Max Number of I/O Queues: 128 00:17:21.889 NVMe Specification Version (VS): 1.3 00:17:21.889 NVMe Specification Version (Identify): 1.3 00:17:21.889 Maximum Queue Entries: 1024 00:17:21.889 Contiguous Queues Required: No 00:17:21.889 Arbitration Mechanisms Supported 00:17:21.889 Weighted Round Robin: Not Supported 00:17:21.889 Vendor Specific: Not Supported 00:17:21.889 Reset Timeout: 7500 ms 00:17:21.889 Doorbell Stride: 4 bytes 00:17:21.889 NVM Subsystem Reset: Not Supported 00:17:21.889 Command Sets Supported 00:17:21.889 NVM Command Set: Supported 00:17:21.889 Boot Partition: Not Supported 00:17:21.889 Memory Page Size Minimum: 4096 bytes 00:17:21.889 Memory Page Size Maximum: 4096 bytes 00:17:21.889 Persistent Memory Region: Not Supported 00:17:21.889 Optional Asynchronous Events Supported 00:17:21.889 Namespace Attribute Notices: Supported 00:17:21.889 Firmware Activation Notices: Not Supported 00:17:21.889 ANA Change Notices: Supported 00:17:21.889 PLE Aggregate Log Change Notices: Not Supported 00:17:21.889 LBA Status Info Alert Notices: Not Supported 00:17:21.889 EGE Aggregate Log Change Notices: Not Supported 00:17:21.889 Normal NVM Subsystem Shutdown event: Not Supported 00:17:21.889 Zone Descriptor Change Notices: Not Supported 00:17:21.889 Discovery Log Change Notices: Not Supported 00:17:21.889 Controller Attributes 00:17:21.889 128-bit Host Identifier: Supported 00:17:21.889 Non-Operational Permissive Mode: Not Supported 00:17:21.889 NVM Sets: Not Supported 00:17:21.889 Read Recovery Levels: Not Supported 00:17:21.889 Endurance Groups: Not Supported 00:17:21.889 Predictable Latency Mode: Not Supported 00:17:21.889 Traffic Based Keep ALive: Supported 00:17:21.889 Namespace Granularity: Not Supported 00:17:21.889 SQ Associations: Not Supported 00:17:21.889 UUID List: Not Supported 00:17:21.889 Multi-Domain Subsystem: Not Supported 00:17:21.889 Fixed Capacity Management: Not Supported 00:17:21.889 Variable Capacity Management: Not Supported 00:17:21.889 
Delete Endurance Group: Not Supported 00:17:21.889 Delete NVM Set: Not Supported 00:17:21.889 Extended LBA Formats Supported: Not Supported 00:17:21.889 Flexible Data Placement Supported: Not Supported 00:17:21.889 00:17:21.889 Controller Memory Buffer Support 00:17:21.889 ================================ 00:17:21.889 Supported: No 00:17:21.889 00:17:21.889 Persistent Memory Region Support 00:17:21.889 ================================ 00:17:21.889 Supported: No 00:17:21.889 00:17:21.889 Admin Command Set Attributes 00:17:21.889 ============================ 00:17:21.889 Security Send/Receive: Not Supported 00:17:21.889 Format NVM: Not Supported 00:17:21.889 Firmware Activate/Download: Not Supported 00:17:21.889 Namespace Management: Not Supported 00:17:21.889 Device Self-Test: Not Supported 00:17:21.889 Directives: Not Supported 00:17:21.889 NVMe-MI: Not Supported 00:17:21.890 Virtualization Management: Not Supported 00:17:21.890 Doorbell Buffer Config: Not Supported 00:17:21.890 Get LBA Status Capability: Not Supported 00:17:21.890 Command & Feature Lockdown Capability: Not Supported 00:17:21.890 Abort Command Limit: 4 00:17:21.890 Async Event Request Limit: 4 00:17:21.890 Number of Firmware Slots: N/A 00:17:21.890 Firmware Slot 1 Read-Only: N/A 00:17:21.890 Firmware Activation Without Reset: N/A 00:17:21.890 Multiple Update Detection Support: N/A 00:17:21.890 Firmware Update Granularity: No Information Provided 00:17:21.890 Per-Namespace SMART Log: Yes 00:17:21.890 Asymmetric Namespace Access Log Page: Supported 00:17:21.890 ANA Transition Time : 10 sec 00:17:21.890 00:17:21.890 Asymmetric Namespace Access Capabilities 00:17:21.890 ANA Optimized State : Supported 00:17:21.890 ANA Non-Optimized State : Supported 00:17:21.890 ANA Inaccessible State : Supported 00:17:21.890 ANA Persistent Loss State : Supported 00:17:21.890 ANA Change State : Supported 00:17:21.890 ANAGRPID is not changed : No 00:17:21.890 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:21.890 00:17:21.890 ANA Group Identifier Maximum : 128 00:17:21.890 Number of ANA Group Identifiers : 128 00:17:21.890 Max Number of Allowed Namespaces : 1024 00:17:21.890 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:17:21.890 Command Effects Log Page: Supported 00:17:21.890 Get Log Page Extended Data: Supported 00:17:21.890 Telemetry Log Pages: Not Supported 00:17:21.890 Persistent Event Log Pages: Not Supported 00:17:21.890 Supported Log Pages Log Page: May Support 00:17:21.890 Commands Supported & Effects Log Page: Not Supported 00:17:21.890 Feature Identifiers & Effects Log Page:May Support 00:17:21.890 NVMe-MI Commands & Effects Log Page: May Support 00:17:21.890 Data Area 4 for Telemetry Log: Not Supported 00:17:21.890 Error Log Page Entries Supported: 128 00:17:21.890 Keep Alive: Supported 00:17:21.890 Keep Alive Granularity: 1000 ms 00:17:21.890 00:17:21.890 NVM Command Set Attributes 00:17:21.890 ========================== 00:17:21.890 Submission Queue Entry Size 00:17:21.890 Max: 64 00:17:21.890 Min: 64 00:17:21.890 Completion Queue Entry Size 00:17:21.890 Max: 16 00:17:21.890 Min: 16 00:17:21.890 Number of Namespaces: 1024 00:17:21.890 Compare Command: Not Supported 00:17:21.890 Write Uncorrectable Command: Not Supported 00:17:21.890 Dataset Management Command: Supported 00:17:21.890 Write Zeroes Command: Supported 00:17:21.890 Set Features Save Field: Not Supported 00:17:21.890 Reservations: Not Supported 00:17:21.890 Timestamp: Not Supported 00:17:21.890 Copy: Not Supported 00:17:21.890 Volatile Write Cache: Present 
00:17:21.890 Atomic Write Unit (Normal): 1 00:17:21.890 Atomic Write Unit (PFail): 1 00:17:21.890 Atomic Compare & Write Unit: 1 00:17:21.890 Fused Compare & Write: Not Supported 00:17:21.890 Scatter-Gather List 00:17:21.890 SGL Command Set: Supported 00:17:21.890 SGL Keyed: Not Supported 00:17:21.890 SGL Bit Bucket Descriptor: Not Supported 00:17:21.890 SGL Metadata Pointer: Not Supported 00:17:21.890 Oversized SGL: Not Supported 00:17:21.890 SGL Metadata Address: Not Supported 00:17:21.890 SGL Offset: Supported 00:17:21.890 Transport SGL Data Block: Not Supported 00:17:21.890 Replay Protected Memory Block: Not Supported 00:17:21.890 00:17:21.890 Firmware Slot Information 00:17:21.890 ========================= 00:17:21.890 Active slot: 0 00:17:21.890 00:17:21.890 Asymmetric Namespace Access 00:17:21.890 =========================== 00:17:21.890 Change Count : 0 00:17:21.890 Number of ANA Group Descriptors : 1 00:17:21.890 ANA Group Descriptor : 0 00:17:21.890 ANA Group ID : 1 00:17:21.890 Number of NSID Values : 1 00:17:21.890 Change Count : 0 00:17:21.890 ANA State : 1 00:17:21.890 Namespace Identifier : 1 00:17:21.890 00:17:21.890 Commands Supported and Effects 00:17:21.890 ============================== 00:17:21.890 Admin Commands 00:17:21.890 -------------- 00:17:21.890 Get Log Page (02h): Supported 00:17:21.890 Identify (06h): Supported 00:17:21.890 Abort (08h): Supported 00:17:21.890 Set Features (09h): Supported 00:17:21.890 Get Features (0Ah): Supported 00:17:21.890 Asynchronous Event Request (0Ch): Supported 00:17:21.890 Keep Alive (18h): Supported 00:17:21.890 I/O Commands 00:17:21.890 ------------ 00:17:21.890 Flush (00h): Supported 00:17:21.890 Write (01h): Supported LBA-Change 00:17:21.890 Read (02h): Supported 00:17:21.890 Write Zeroes (08h): Supported LBA-Change 00:17:21.890 Dataset Management (09h): Supported 00:17:21.890 00:17:21.890 Error Log 00:17:21.890 ========= 00:17:21.890 Entry: 0 00:17:21.890 Error Count: 0x3 00:17:21.890 Submission Queue Id: 0x0 00:17:21.890 Command Id: 0x5 00:17:21.890 Phase Bit: 0 00:17:21.890 Status Code: 0x2 00:17:21.890 Status Code Type: 0x0 00:17:21.890 Do Not Retry: 1 00:17:21.890 Error Location: 0x28 00:17:21.890 LBA: 0x0 00:17:21.890 Namespace: 0x0 00:17:21.890 Vendor Log Page: 0x0 00:17:21.890 ----------- 00:17:21.890 Entry: 1 00:17:21.890 Error Count: 0x2 00:17:21.890 Submission Queue Id: 0x0 00:17:21.890 Command Id: 0x5 00:17:21.890 Phase Bit: 0 00:17:21.890 Status Code: 0x2 00:17:21.890 Status Code Type: 0x0 00:17:21.890 Do Not Retry: 1 00:17:21.890 Error Location: 0x28 00:17:21.890 LBA: 0x0 00:17:21.890 Namespace: 0x0 00:17:21.890 Vendor Log Page: 0x0 00:17:21.890 ----------- 00:17:21.890 Entry: 2 00:17:21.890 Error Count: 0x1 00:17:21.890 Submission Queue Id: 0x0 00:17:21.890 Command Id: 0x4 00:17:21.890 Phase Bit: 0 00:17:21.890 Status Code: 0x2 00:17:21.890 Status Code Type: 0x0 00:17:21.890 Do Not Retry: 1 00:17:21.890 Error Location: 0x28 00:17:21.890 LBA: 0x0 00:17:21.890 Namespace: 0x0 00:17:21.890 Vendor Log Page: 0x0 00:17:21.890 00:17:21.890 Number of Queues 00:17:21.890 ================ 00:17:21.890 Number of I/O Submission Queues: 128 00:17:21.890 Number of I/O Completion Queues: 128 00:17:21.890 00:17:21.890 ZNS Specific Controller Data 00:17:21.890 ============================ 00:17:21.890 Zone Append Size Limit: 0 00:17:21.890 00:17:21.890 00:17:21.890 Active Namespaces 00:17:21.890 ================= 00:17:21.890 get_feature(0x05) failed 00:17:21.890 Namespace ID:1 00:17:21.890 Command Set Identifier: NVM (00h) 
00:17:21.890 Deallocate: Supported 00:17:21.890 Deallocated/Unwritten Error: Not Supported 00:17:21.890 Deallocated Read Value: Unknown 00:17:21.890 Deallocate in Write Zeroes: Not Supported 00:17:21.890 Deallocated Guard Field: 0xFFFF 00:17:21.890 Flush: Supported 00:17:21.890 Reservation: Not Supported 00:17:21.890 Namespace Sharing Capabilities: Multiple Controllers 00:17:21.890 Size (in LBAs): 1310720 (5GiB) 00:17:21.890 Capacity (in LBAs): 1310720 (5GiB) 00:17:21.890 Utilization (in LBAs): 1310720 (5GiB) 00:17:21.890 UUID: 41cdb866-0c7f-4788-b4fb-3c891ea2dbda 00:17:21.890 Thin Provisioning: Not Supported 00:17:21.890 Per-NS Atomic Units: Yes 00:17:21.890 Atomic Boundary Size (Normal): 0 00:17:21.890 Atomic Boundary Size (PFail): 0 00:17:21.890 Atomic Boundary Offset: 0 00:17:21.890 NGUID/EUI64 Never Reused: No 00:17:21.890 ANA group ID: 1 00:17:21.890 Namespace Write Protected: No 00:17:21.890 Number of LBA Formats: 1 00:17:21.890 Current LBA Format: LBA Format #00 00:17:21.890 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:21.890 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.890 rmmod nvme_tcp 00:17:21.890 rmmod nvme_fabrics 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:21.890 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.891 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.891 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.891 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.891 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.891 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:21.891 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:21.891 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:21.891 
08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:22.149 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:22.149 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:22.149 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:22.149 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:22.149 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:22.149 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:22.149 08:12:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:22.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:22.717 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:22.976 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:22.976 ************************************ 00:17:22.976 END TEST nvmf_identify_kernel_target 00:17:22.976 ************************************ 00:17:22.976 00:17:22.976 real 0m2.810s 00:17:22.976 user 0m0.988s 00:17:22.976 sys 0m1.318s 00:17:22.976 08:12:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:22.976 08:12:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.976 08:12:44 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:22.976 08:12:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:22.976 08:12:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:22.976 08:12:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.976 ************************************ 00:17:22.976 START TEST nvmf_auth_host 00:17:22.976 ************************************ 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:22.976 * Looking for test storage... 
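For completeness, the clean_kernel_target teardown traced just above unwinds that configfs state before the next test starts; a minimal sketch, reusing the $nvmet/$subsys shorthand from the setup sketch and again assuming the standard enable attribute as the target of the echo:

  echo 0 > "$subsys/namespaces/1/enable"   # assumption: redirection target not visible in the trace
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1"
  rmdir "$nvmet/ports/1"
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet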
00:17:22.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.976 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:23.235 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:23.236 Cannot find device "nvmf_tgt_br" 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.236 Cannot find device "nvmf_tgt_br2" 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:23.236 Cannot find device "nvmf_tgt_br" 
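Here nvmftestinit runs nvmf_veth_init again for the auth test, rebuilding the same topology the identify test used: an initiator veth pair on the host, two target veth pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, and a bridge joining the host-side ends (the "Cannot find device" messages are the pre-cleanup of interfaces that do not exist yet and are tolerated via true). A condensed sketch of the commands traced here and earlier in the log, with interface names and addresses taken verbatim from the trace:

  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check for this topology before nvmf_tgt is started inside the namespace.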
00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:23.236 Cannot find device "nvmf_tgt_br2" 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:23.236 08:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:23.236 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:23.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:23.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:23.498 00:17:23.498 --- 10.0.0.2 ping statistics --- 00:17:23.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.498 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:23.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:23.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:17:23.498 00:17:23.498 --- 10.0.0.3 ping statistics --- 00:17:23.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.498 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:23.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:23.498 00:17:23.498 --- 10.0.0.1 ping statistics --- 00:17:23.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.498 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78577 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78577 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 78577 ']' 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:23.498 08:12:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:23.498 08:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.438 08:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:24.438 08:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:17:24.438 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.438 08:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:24.438 08:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.697 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.697 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93095607f01bceb77e0ba59b2917ce75 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.k1U 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93095607f01bceb77e0ba59b2917ce75 0 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93095607f01bceb77e0ba59b2917ce75 0 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=93095607f01bceb77e0ba59b2917ce75 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.k1U 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.k1U 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.k1U 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=21906be97b12566cb873f09819453c698838d062b7eafe77a467ea714df47602 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.izV 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 21906be97b12566cb873f09819453c698838d062b7eafe77a467ea714df47602 3 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 21906be97b12566cb873f09819453c698838d062b7eafe77a467ea714df47602 3 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=21906be97b12566cb873f09819453c698838d062b7eafe77a467ea714df47602 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.izV 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.izV 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.izV 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e42f8e2fd4ff174448df28ebe25608467d191aad745cd08a 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RPY 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e42f8e2fd4ff174448df28ebe25608467d191aad745cd08a 0 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e42f8e2fd4ff174448df28ebe25608467d191aad745cd08a 0 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e42f8e2fd4ff174448df28ebe25608467d191aad745cd08a 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RPY 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RPY 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.RPY 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=634397fc9982b684917221d00ffa41b284e7a5a55a2021eb 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jTC 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 634397fc9982b684917221d00ffa41b284e7a5a55a2021eb 2 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 634397fc9982b684917221d00ffa41b284e7a5a55a2021eb 2 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=634397fc9982b684917221d00ffa41b284e7a5a55a2021eb 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:24.698 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jTC 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jTC 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jTC 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=444bb124aedf375a7a5967998dd741e6 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Uld 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 444bb124aedf375a7a5967998dd741e6 
1 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 444bb124aedf375a7a5967998dd741e6 1 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=444bb124aedf375a7a5967998dd741e6 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Uld 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Uld 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Uld 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a28060f9d69c1631f8df5113e37d8501 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bEI 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a28060f9d69c1631f8df5113e37d8501 1 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a28060f9d69c1631f8df5113e37d8501 1 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a28060f9d69c1631f8df5113e37d8501 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bEI 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bEI 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.bEI 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:24.957 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:24.958 08:12:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e73b7a754cbb27698c684c71655109df20cd9fc563775891 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.UyM 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e73b7a754cbb27698c684c71655109df20cd9fc563775891 2 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e73b7a754cbb27698c684c71655109df20cd9fc563775891 2 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e73b7a754cbb27698c684c71655109df20cd9fc563775891 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.UyM 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.UyM 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.UyM 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a85ed92a6dfde680047fea9b107e8d83 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ZI7 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a85ed92a6dfde680047fea9b107e8d83 0 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a85ed92a6dfde680047fea9b107e8d83 0 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a85ed92a6dfde680047fea9b107e8d83 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:24.958 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ZI7 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ZI7 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ZI7 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b6c57a09015cac06b81a79afdc9165bddf801e592d40c6c5b93453abf4fac46c 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.UOt 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b6c57a09015cac06b81a79afdc9165bddf801e592d40c6c5b93453abf4fac46c 3 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b6c57a09015cac06b81a79afdc9165bddf801e592d40c6c5b93453abf4fac46c 3 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b6c57a09015cac06b81a79afdc9165bddf801e592d40c6c5b93453abf4fac46c 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.UOt 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.UOt 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.UOt 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78577 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 78577 ']' 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:25.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
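[editor's note] The gen_dhchap_key calls above all follow the same pattern: map the digest name to its numeric id (null=0, sha256=1, sha384=2, sha512=3), draw len/2 random bytes with xxd, wrap the hex string into a DHHC-1 secret with an inline python helper, and store it 0600 in a mktemp file. A minimal standalone sketch of that flow is below; it is not the nvmf/common.sh helper itself, and the CRC-32 trailer byte order in the python part is an assumption — defer to the real helper for the authoritative format.

gen_dhchap_key() {
    local digest=$1 len=$2                            # e.g. "sha256" 32; len counts hex characters
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len/2 random bytes rendered as a hex string
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()
trailer = zlib.crc32(secret).to_bytes(4, "little")    # ASSUMED byte order of the CRC-32 trailer
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(secret + trailer).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}

Calling gen_dhchap_key sha256 32 in this sketch yields a /tmp/spdk.key-sha256.XXX file holding a "DHHC-1:01:...:" string of the same shape as the keys registered below.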
00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:25.217 08:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.k1U 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.izV ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.izV 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.RPY 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jTC ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jTC 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Uld 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.bEI ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bEI 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
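[editor's note] The keyring_file_add_key registrations above (and the key3/key4 ones that follow) all come from the same loop in host/auth.sh. In the test they go through rpc_cmd, which talks to the nvmf_tgt started inside nvmf_tgt_ns_spdk on /var/tmp/spdk.sock; a standalone approximation using scripts/rpc.py directly (the rpc.py path is assumed from the nvmf_tgt binary location shown earlier) looks roughly like:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # assumed path; rpc_cmd resolves this for the running app
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"            # host secret for this key index
    if [[ -n ${ckeys[i]:-} ]]; then
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"      # controller secret for bidirectional auth; ckeys[4] is intentionally empty
    fi
done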
00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.UyM 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ZI7 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ZI7 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.UOt 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
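[editor's note] What follows is configure_kernel_target plus the host/auth.sh wiring: pick the first unused, non-zoned block device, then build a kernel nvmet subsystem, namespace, TCP port and allowed host under configfs, and set that host's DH-HMAC-CHAP parameters. The xtrace only shows the echoed values, not their redirect targets, so the configfs attribute names in this condensed sketch are the standard nvmet ones and should be read as assumptions; the key values are truncated placeholders for the full DHHC-1 strings visible in the log.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"       # model string echoed in the log
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"            # first device that passed the GPT/zoned checks
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"                               # main namespace IP from get_main_ns_ip
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                               # expose the subsystem on the port

# host/auth.sh then restricts access to host0 and wires up DH-HMAC-CHAP for it:
echo 0 > "$subsys/attr_allow_any_host"                            # assumed target of the "echo 0"; only allowed_hosts may connect
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"                         # digest under test
echo ffdhe2048      > "$host/dhchap_dhgroup"                      # DH group under test
echo 'DHHC-1:00:...:' > "$host/dhchap_key"                        # keys[keyid]; full value appears in the log
echo 'DHHC-1:02:...:' > "$host/dhchap_ctrl_key"                   # ckeys[keyid], only set when a controller key exists

The nvme discover output further down (discovery subsystem plus nqn.2024-02.io.spdk:cnode0 at 10.0.0.1:4420) confirms the port/subsystem link took effect before the authenticated bdev_nvme_attach_controller calls begin.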
00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:25.476 08:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:26.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:26.043 Waiting for block devices as requested 00:17:26.043 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:26.043 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:26.610 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:26.610 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:26.610 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:26.610 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:17:26.610 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:26.610 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:17:26.610 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:26.610 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:26.610 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:26.870 No valid GPT data, bailing 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n2 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:26.870 No valid GPT data, bailing 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n3 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:26.870 No valid GPT data, bailing 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:26.870 No valid GPT data, bailing 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:26.870 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:27.129 08:12:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid=0b063e5e-64f6-4b4f-b15f-bd51b74609ab -a 10.0.0.1 -t tcp -s 4420 00:17:27.129 00:17:27.129 Discovery Log Number of Records 2, Generation counter 2 00:17:27.129 =====Discovery Log Entry 0====== 00:17:27.129 trtype: tcp 00:17:27.129 adrfam: ipv4 00:17:27.129 subtype: current discovery subsystem 00:17:27.129 treq: not specified, sq flow control disable supported 00:17:27.129 portid: 1 00:17:27.129 trsvcid: 4420 00:17:27.129 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:27.129 traddr: 10.0.0.1 00:17:27.129 eflags: none 00:17:27.129 sectype: none 00:17:27.129 =====Discovery Log Entry 1====== 00:17:27.129 trtype: tcp 00:17:27.129 adrfam: ipv4 00:17:27.129 subtype: nvme subsystem 00:17:27.129 treq: not specified, sq flow control disable supported 00:17:27.129 portid: 1 00:17:27.129 trsvcid: 4420 00:17:27.129 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:27.129 traddr: 10.0.0.1 00:17:27.129 eflags: none 00:17:27.129 sectype: none 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.129 08:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.388 nvme0n1 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.388 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.389 nvme0n1 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.389 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.648 nvme0n1 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.648 08:12:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.648 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.649 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.908 nvme0n1 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:27.908 08:12:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.908 nvme0n1 00:17:27.908 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.909 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.909 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.909 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.909 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.909 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.909 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.909 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.909 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.909 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.168 nvme0n1 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.168 08:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.427 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.428 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.428 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.428 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.687 nvme0n1 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.687 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 nvme0n1 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.946 08:12:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 nvme0n1 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.206 nvme0n1 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.206 08:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.206 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:29.206 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.206 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.206 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.206 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.206 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.206 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
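The repeated "local ip" / "ip_candidates" block in the trace above is the harness resolving which address to attach to: an associative array maps the transport name to the name of the environment variable that holds the address, and indirect expansion turns that name into its value (NVMF_INITIATOR_IP, i.e. 10.0.0.1 for TCP in this run). The helper's body is not shown in the xtrace, so the following is only a minimal sketch of the selection logic that is visible, reusing the names from the trace (get_main_ns_ip, NVMF_FIRST_TARGET_IP, NVMF_INITIATOR_IP) and assuming the transport string is passed in as an argument:

    get_main_ns_ip() {
        # Sketch only -- the real helper lives in nvmf/common.sh; this keeps
        # just the steps visible in the xtrace above.
        local transport=$1
        local ip
        local -A ip_candidates
        # Map transport -> name of the variable holding its address.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $transport ]] && return 1                   # "[[ -z tcp ]]" in the trace
        [[ -z ${ip_candidates[$transport]} ]] && return 1 # "[[ -z NVMF_INITIATOR_IP ]]"
        ip=${ip_candidates[$transport]}                   # "ip=NVMF_INITIATOR_IP"
        [[ -z ${!ip} ]] && return 1                       # "[[ -z 10.0.0.1 ]]"
        echo "${!ip}"                                     # "echo 10.0.0.1"
    }

    NVMF_INITIATOR_IP=10.0.0.1   # value seen in this log
    get_main_ns_ip tcp           # prints 10.0.0.1
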
00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.207 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.466 nvme0n1 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.466 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
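Each bare "nvme0n1" string in the trace is the bdev name printed by a successful attach, and it marks one iteration of the same cycle: host/auth.sh installs a secret on the kernel target (nvmet_auth_set_key), restricts the SPDK host to a single digest and DH group, attaches nvme0 with the matching DH-HMAC-CHAP key name (plus a controller key when a bidirectional secret exists for that keyid), confirms the controller came up, and detaches it again. Condensed from the commands visible above into a single pass, as a sketch only (rpc_cmd and nvmet_auth_set_key are the harness's own helpers, and keyN/ckeyN are key names registered earlier in the test, not defined here):

    # One pass of the connect_authenticate cycle, condensed from this trace.
    digest=sha256 dhgroup=ffdhe4096 keyid=1
    ip=10.0.0.1    # resolved by get_main_ns_ip, as sketched above

    # Target side: install the secret for this digest/DH group (host/auth.sh@103).
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Host side: restrict negotiation to the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the per-key secret; the controller key is passed only when a
    # bidirectional secret exists for this keyid (host/auth.sh@58, verbatim).
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The attach only completes if DH-HMAC-CHAP succeeded; verify, then tear down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
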
00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.032 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.033 08:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.033 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.033 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.033 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.292 nvme0n1 00:17:30.292 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.292 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.292 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.292 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.292 08:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.292 08:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.292 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.551 nvme0n1 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.551 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.552 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.810 nvme0n1 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.810 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.811 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.811 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.811 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.811 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.811 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.811 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.811 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.069 nvme0n1 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.069 08:12:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.069 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.070 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.070 08:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.070 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.070 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.070 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.327 nvme0n1 00:17:31.327 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.327 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.327 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.327 08:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.327 08:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.327 08:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.228 08:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.229 nvme0n1 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.229 08:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.229 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.795 nvme0n1 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.795 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.796 
08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.796 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.055 nvme0n1 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.055 08:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.312 nvme0n1 00:17:34.312 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.312 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.312 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.312 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.312 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.312 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.613 08:12:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.613 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.880 nvme0n1 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.880 08:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.881 08:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.881 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.881 08:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.487 nvme0n1 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.487 08:12:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.487 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.055 nvme0n1 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.055 08:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.623 nvme0n1 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.623 
08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
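For reference, the host-side sequence that connect_authenticate keeps repeating in the trace above boils down to the sketch below. rpc_cmd in the log is the test wrapper around SPDK's scripts/rpc.py; the rpc.py path and the pre-registered key names key3/ckey3 are assumptions taken from the surrounding output, not an exact reproduction of host/auth.sh.

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate iteration as seen in the trace
# (sha256 digest, ffdhe8192 DH group, key id 3). key3/ckey3 are assumed to have
# been registered with the bdev_nvme keyring earlier in the test run.
set -euo pipefail

rpc=scripts/rpc.py   # assumed path, relative to an SPDK checkout

# Restrict the initiator to the digest/DH-group pair under test.
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach to the subsystem; the controller only appears if DH-HMAC-CHAP succeeds.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Verify the controller really exists, then detach for the next iteration.
name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0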
00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.623 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.189 nvme0n1 00:17:37.189 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.189 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.189 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.189 08:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.189 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.189 08:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.189 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.189 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.189 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.190 
08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.190 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.757 nvme0n1 00:17:37.757 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.757 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.757 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.757 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.757 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.757 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.757 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.757 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.757 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.757 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.016 nvme0n1 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.016 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.276 nvme0n1 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.276 08:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.276 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.276 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:38.276 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.276 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.276 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.276 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.276 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.276 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.276 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.276 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.277 nvme0n1 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.277 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:38.536 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.537 nvme0n1 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.537 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.855 nvme0n1 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.855 nvme0n1 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.855 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.117 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.118 nvme0n1 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.118 08:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.377 nvme0n1 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.377 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.378 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.637 nvme0n1 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.637 nvme0n1 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.637 08:13:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.637 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.896 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.897 nvme0n1 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.897 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:40.156 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.157 nvme0n1 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.157 08:13:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.416 08:13:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.416 nvme0n1 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.416 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:40.674 08:13:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.674 nvme0n1 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.674 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.932 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.933 nvme0n1 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.933 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.192 08:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.451 nvme0n1 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:41.451 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.452 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.019 nvme0n1 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.020 08:13:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.020 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.279 nvme0n1 00:17:42.279 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.279 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.279 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.279 08:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.279 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.279 08:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.279 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.537 nvme0n1 00:17:42.537 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.537 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.537 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.537 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.537 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.537 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.797 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.056 nvme0n1 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.056 08:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.057 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.057 08:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.625 nvme0n1 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.625 08:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.884 08:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.451 nvme0n1 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.451 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.050 nvme0n1 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.050 08:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.618 nvme0n1 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.618 08:13:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.618 08:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.186 nvme0n1 00:17:46.186 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.186 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.186 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.186 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.186 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.186 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.446 nvme0n1 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.446 08:13:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.446 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.447 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.447 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.447 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.447 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.447 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.447 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.705 nvme0n1 00:17:46.705 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.705 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.705 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.706 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.965 nvme0n1 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.965 08:13:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.965 08:13:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.965 nvme0n1 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.965 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.226 nvme0n1 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.226 08:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:47.226 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.227 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.486 nvme0n1 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.486 
08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.486 08:13:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.486 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.745 nvme0n1 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.745 nvme0n1 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.745 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.004 08:13:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.004 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.005 nvme0n1 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.005 
08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.005 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.264 nvme0n1 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.264 08:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:48.264 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.265 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.524 nvme0n1 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.524 08:13:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.524 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.784 nvme0n1 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
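The trace above repeats one fixed pattern per key index: host/auth.sh first programs the target side with nvmet_auth_set_key (digest, DH group, key id and the DHHC-1 secret, plus an optional controller secret), then connect_authenticate restricts the initiator to that digest/DH group via bdev_nvme_set_options and attaches with bdev_nvme_attach_controller, passing --dhchap-key/--dhchap-ctrlr-key. The sketch below is reconstructed from the xtrace output only, not the literal auth.sh body; rpc_cmd is the test-harness RPC helper seen throughout the log, and the key names (key1, ckey1, ...) refer to keys registered earlier in the run, outside this excerpt.

# Minimal sketch of one connect_authenticate round, as it appears in the trace
# (an approximation, not the literal host/auth.sh code).
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3            # e.g. sha512 ffdhe4096 1
    local ckey=()
    # The controller key is optional; keyid 4 in this run has no ckey.
    [[ -n ${ckeys[keyid]:-} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")

    # Limit the initiator to the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the DH-HMAC-CHAP key pair for this key index.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
}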
00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.784 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.044 nvme0n1 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.044 08:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.303 nvme0n1 00:17:49.303 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.303 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.303 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.303 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.303 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.303 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.303 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.303 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.303 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.303 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.304 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.563 nvme0n1 00:17:49.563 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.563 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.563 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.563 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.563 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.563 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.563 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.563 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.563 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:17:49.563 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
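The "for dhgroup in ..." and "for keyid in ..." markers (host/auth.sh@101 and @102) that appear at this point show the outer loop advancing from ffdhe4096 to ffdhe6144: for each DH group the target key is reprogrammed and the connect/verify cycle is re-run for every key index. A loop skeleton implied by those markers, with arrays kept illustrative (only sha512 and ffdhe4096/6144/8192 appear in this excerpt):

# Loop structure implied by the @101/@102/@103/@104 markers in the trace.
digest=sha512
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)            # groups visible in this excerpt
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: digest, group, key pair
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side: attach, verify, detach
    done
done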
00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.822 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.081 nvme0n1 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
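Between every attach and the next key, the trace runs the same success check: the controller list is fetched over RPC, its name is compared with the expected nvme0, and the controller is detached so the next (digest, group, key) combination starts from a clean state. A condensed form of that check, using only the commands visible in the log:

# Verification and teardown step repeated after each attach in the trace.
ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrlr == "nvme0" ]]                       # authentication succeeded: the controller came up
rpc_cmd bdev_nvme_detach_controller nvme0     # tear down before the next key is tried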
00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.081 08:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.648 nvme0n1 00:17:50.648 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.648 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.648 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.648 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.648 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.648 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.649 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.908 nvme0n1 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.908 08:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.475 nvme0n1 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.475 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.476 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.734 nvme0n1 00:17:51.734 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.734 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.734 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.734 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.734 08:13:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.734 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.994 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.994 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMwOTU2MDdmMDFiY2ViNzdlMGJhNTliMjkxN2NlNzXdX9Rq: 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: ]] 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE5MDZiZTk3YjEyNTY2Y2I4NzNmMDk4MTk0NTNjNjk4ODM4ZDA2MmI3ZWFmZTc3YTQ2N2VhNzE0ZGY0NzYwMmN+EWk=: 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.995 08:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.563 nvme0n1 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.563 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.131 nvme0n1 00:17:53.131 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.131 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.131 08:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.131 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.131 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.131 08:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.390 08:13:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ0YmIxMjRhZWRmMzc1YTdhNTk2Nzk5OGRkNzQxZTZi64n7: 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: ]] 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTI4MDYwZjlkNjljMTYzMWY4ZGY1MTEzZTM3ZDg1MDF263zn: 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.390 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.958 nvme0n1 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTczYjdhNzU0Y2JiMjc2OThjNjg0YzcxNjU1MTA5ZGYyMGNkOWZjNTYzNzc1ODkxIXFnLQ==: 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: ]] 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTg1ZWQ5MmE2ZGZkZTY4MDA0N2ZlYTliMTA3ZThkODMknoSR: 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:53.958 08:13:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.958 08:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.526 nvme0n1 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjZjNTdhMDkwMTVjYWMwNmI4MWE3OWFmZGM5MTY1YmRkZjgwMWU1OTJkNDBjNmM1YjkzNDUzYWJmNGZhYzQ2Y5d2dhM=: 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:17:54.526 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.094 nvme0n1 00:17:55.094 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.094 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.094 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.094 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.094 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.094 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.380 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.380 08:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.380 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.380 08:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTQyZjhlMmZkNGZmMTc0NDQ4ZGYyOGViZTI1NjA4NDY3ZDE5MWFhZDc0NWNkMDhhHPcL0w==: 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: ]] 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0Mzk3ZmM5OTgyYjY4NDkxNzIyMWQwMGZmYTQxYjI4NGU3YTVhNTVhMjAyMWVi3OYPSA==: 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.380 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.381 
08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.381 request: 00:17:55.381 { 00:17:55.381 "name": "nvme0", 00:17:55.381 "trtype": "tcp", 00:17:55.381 "traddr": "10.0.0.1", 00:17:55.381 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:55.381 "adrfam": "ipv4", 00:17:55.381 "trsvcid": "4420", 00:17:55.381 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:55.381 "method": "bdev_nvme_attach_controller", 00:17:55.381 "req_id": 1 00:17:55.381 } 00:17:55.381 Got JSON-RPC error response 00:17:55.381 response: 00:17:55.381 { 00:17:55.381 "code": -5, 00:17:55.381 "message": "Input/output error" 00:17:55.381 } 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:55.381 
08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.381 request: 00:17:55.381 { 00:17:55.381 "name": "nvme0", 00:17:55.381 "trtype": "tcp", 00:17:55.381 "traddr": "10.0.0.1", 00:17:55.381 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:55.381 "adrfam": "ipv4", 00:17:55.381 "trsvcid": "4420", 00:17:55.381 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:55.381 "dhchap_key": "key2", 00:17:55.381 "method": "bdev_nvme_attach_controller", 00:17:55.381 "req_id": 1 00:17:55.381 } 00:17:55.381 Got JSON-RPC error response 00:17:55.381 response: 00:17:55.381 { 00:17:55.381 "code": -5, 00:17:55.381 "message": "Input/output error" 00:17:55.381 } 00:17:55.381 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:55.382 
08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.382 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.382 request: 00:17:55.382 { 00:17:55.382 "name": "nvme0", 00:17:55.382 "trtype": "tcp", 00:17:55.382 "traddr": "10.0.0.1", 00:17:55.382 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:55.382 "adrfam": "ipv4", 00:17:55.382 "trsvcid": "4420", 00:17:55.382 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:55.382 "dhchap_key": "key1", 00:17:55.382 "dhchap_ctrlr_key": "ckey2", 00:17:55.382 "method": "bdev_nvme_attach_controller", 00:17:55.382 "req_id": 1 
00:17:55.382 } 00:17:55.382 Got JSON-RPC error response 00:17:55.382 response: 00:17:55.382 { 00:17:55.382 "code": -5, 00:17:55.382 "message": "Input/output error" 00:17:55.382 } 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.641 rmmod nvme_tcp 00:17:55.641 rmmod nvme_fabrics 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78577 ']' 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78577 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 78577 ']' 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 78577 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 78577 00:17:55.641 killing process with pid 78577 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:55.641 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 78577' 00:17:55.642 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 78577 00:17:55.642 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 78577 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:55.900 08:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:56.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:56.726 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:56.726 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:56.726 08:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.k1U /tmp/spdk.key-null.RPY /tmp/spdk.key-sha256.Uld /tmp/spdk.key-sha384.UyM /tmp/spdk.key-sha512.UOt /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:56.726 08:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:57.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:57.295 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:57.295 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:57.295 ************************************ 00:17:57.295 END TEST nvmf_auth_host 00:17:57.295 ************************************ 00:17:57.295 00:17:57.295 real 0m34.205s 00:17:57.295 user 0m31.290s 00:17:57.295 sys 0m3.747s 00:17:57.295 08:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:57.295 08:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.295 08:13:18 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:17:57.295 08:13:18 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:57.295 08:13:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:57.295 08:13:18 
nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:57.295 08:13:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:57.295 ************************************ 00:17:57.295 START TEST nvmf_digest 00:17:57.295 ************************************ 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:57.295 * Looking for test storage... 00:17:57.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:57.295 08:13:19 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:57.295 Cannot find device "nvmf_tgt_br" 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:57.295 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.554 Cannot find device "nvmf_tgt_br2" 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:57.554 Cannot find device "nvmf_tgt_br" 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:57.554 Cannot find device "nvmf_tgt_br2" 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.554 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.554 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.555 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.813 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.813 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.813 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:57.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:57.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:57.813 00:17:57.813 --- 10.0.0.2 ping statistics --- 00:17:57.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.813 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:57.813 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:57.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:57.813 00:17:57.813 --- 10.0.0.3 ping statistics --- 00:17:57.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.813 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:57.813 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:57.813 00:17:57.813 --- 10.0.0.1 ping statistics --- 00:17:57.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.813 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:57.814 ************************************ 00:17:57.814 START TEST nvmf_digest_clean 00:17:57.814 ************************************ 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.814 08:13:19 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80138 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80138 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 80138 ']' 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:57.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:57.814 08:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.814 [2024-06-10 08:13:19.547215] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:17:57.814 [2024-06-10 08:13:19.547320] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.072 [2024-06-10 08:13:19.692008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.072 [2024-06-10 08:13:19.806153] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.072 [2024-06-10 08:13:19.806220] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.072 [2024-06-10 08:13:19.806235] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.072 [2024-06-10 08:13:19.806246] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.072 [2024-06-10 08:13:19.806260] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:58.072 [2024-06-10 08:13:19.806291] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.008 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:59.008 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:17:59.008 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.008 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:59.008 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:59.008 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.008 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:59.008 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:59.008 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:59.009 [2024-06-10 08:13:20.634237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:59.009 null0 00:17:59.009 [2024-06-10 08:13:20.681912] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.009 [2024-06-10 08:13:20.705941] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80170 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80170 /var/tmp/bperf.sock 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 80170 ']' 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:59.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
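For reference, the target-side bring-up captured above (nvmf_tgt launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, then the null0 bdev, the TCP transport init, and the listener on 10.0.0.2:4420) can be condensed into a short shell sketch. The RPC batch itself lives in host/digest.sh and nvmf/common.sh and is not echoed in this log, so the sequence below is a reconstruction: the RPC names are standard SPDK RPCs, but the bdev size, block size, serial number, and subsystem flags are assumptions.

# Sketch only; paths match this CI job, arguments marked below are guesses.
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"                         # talks to the default /var/tmp/spdk.sock

ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten "$nvmfpid"

$RPC sock_set_default_impl -i uring                # SPDK_TEST_URING=1; matches the "implementaion override: uring" notice
$RPC framework_start_init                          # finishes the init deferred by --wait-for-rpc
$RPC bdev_null_create null0 100 4096               # assumed size (MiB) and block size
$RPC nvmf_create_transport -t tcp                  # "*** TCP Transport Init ***"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # flags assumed
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The host-side bdevperf runs that follow only ever connect to this one subsystem at 10.0.0.2:4420.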
00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:59.009 08:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:59.009 [2024-06-10 08:13:20.769524] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:17:59.009 [2024-06-10 08:13:20.769634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80170 ] 00:17:59.267 [2024-06-10 08:13:20.908413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.267 [2024-06-10 08:13:21.027721] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.205 08:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:00.205 08:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:18:00.205 08:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:00.205 08:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:00.205 08:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:00.205 [2024-06-10 08:13:21.990466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:00.205 08:13:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:00.205 08:13:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:00.770 nvme0n1 00:18:00.770 08:13:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:00.770 08:13:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:00.770 Running I/O for 2 seconds... 
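The initiator side of the run just launched follows the same two-phase pattern; everything after process start is driven over bdevperf's private RPC socket. A condensed sketch of the sequence visible verbatim in this log (first pass: 4096-byte random reads at queue depth 128, with the NVMe/TCP data digest enabled via --ddgst); only the socket-wait loop and the trailing cleanup comment are filled in here.

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
RPC="$SPDK/scripts/rpc.py -s $BPERF_SOCK"

# Start bdevperf idle: -m 2 puts it on core 1, -z plus --wait-for-rpc keep it waiting for RPCs.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!
while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done   # stand-in for waitforlisten "$bperfpid"

$RPC framework_start_init
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0         # exposes nvme0n1 with data digest enabled

# Kick off the 2-second workload; the latency table is printed by bdevperf itself.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
# The harness then reads accel stats over the same socket and kills bdevperf (killprocess).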
00:18:02.674 00:18:02.674 Latency(us) 00:18:02.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.674 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:02.674 nvme0n1 : 2.00 15860.86 61.96 0.00 0.00 8063.93 6791.91 18230.92 00:18:02.674 =================================================================================================================== 00:18:02.674 Total : 15860.86 61.96 0.00 0.00 8063.93 6791.91 18230.92 00:18:02.674 0 00:18:02.933 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:02.933 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:02.933 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:02.933 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:02.933 | select(.opcode=="crc32c") 00:18:02.933 | "\(.module_name) \(.executed)"' 00:18:02.933 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80170 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 80170 ']' 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 80170 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80170 00:18:03.191 killing process with pid 80170 00:18:03.191 Received shutdown signal, test time was about 2.000000 seconds 00:18:03.191 00:18:03.191 Latency(us) 00:18:03.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.191 =================================================================================================================== 00:18:03.191 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80170' 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 80170 00:18:03.191 08:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 80170 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80236 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80236 /var/tmp/bperf.sock 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 80236 ']' 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:03.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:03.191 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:03.450 [2024-06-10 08:13:25.084587] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:18:03.450 [2024-06-10 08:13:25.084885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80236 ] 00:18:03.450 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:03.450 Zero copy mechanism will not be used. 
00:18:03.450 [2024-06-10 08:13:25.214724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.450 [2024-06-10 08:13:25.307952] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.385 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:04.385 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:18:04.385 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:04.385 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:04.385 08:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:04.644 [2024-06-10 08:13:26.306507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:04.644 08:13:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.644 08:13:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.902 nvme0n1 00:18:04.902 08:13:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:04.902 08:13:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:05.160 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:05.160 Zero copy mechanism will not be used. 00:18:05.160 Running I/O for 2 seconds... 
00:18:07.063 00:18:07.063 Latency(us) 00:18:07.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.063 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:07.063 nvme0n1 : 2.00 6433.26 804.16 0.00 0.00 2484.02 2040.55 4974.78 00:18:07.063 =================================================================================================================== 00:18:07.063 Total : 6433.26 804.16 0.00 0.00 2484.02 2040.55 4974.78 00:18:07.063 0 00:18:07.063 08:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:07.063 08:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:07.063 08:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:07.063 08:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:07.063 | select(.opcode=="crc32c") 00:18:07.063 | "\(.module_name) \(.executed)"' 00:18:07.063 08:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80236 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 80236 ']' 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 80236 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80236 00:18:07.322 killing process with pid 80236 00:18:07.322 Received shutdown signal, test time was about 2.000000 seconds 00:18:07.322 00:18:07.322 Latency(us) 00:18:07.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.322 =================================================================================================================== 00:18:07.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80236' 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 80236 00:18:07.322 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 80236 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80291 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80291 /var/tmp/bperf.sock 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 80291 ']' 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:07.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:07.582 08:13:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:07.582 [2024-06-10 08:13:29.330125] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:18:07.582 [2024-06-10 08:13:29.330374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80291 ] 00:18:07.841 [2024-06-10 08:13:29.468785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.841 [2024-06-10 08:13:29.582134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.409 08:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:08.409 08:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:18:08.409 08:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:08.409 08:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:08.409 08:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:08.999 [2024-06-10 08:13:30.559531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:08.999 08:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:08.999 08:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:09.257 nvme0n1 00:18:09.257 08:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:09.257 08:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:09.257 Running I/O for 2 seconds... 
00:18:11.792 00:18:11.792 Latency(us) 00:18:11.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.792 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.792 nvme0n1 : 2.01 16628.81 64.96 0.00 0.00 7691.21 4617.31 15132.86 00:18:11.792 =================================================================================================================== 00:18:11.792 Total : 16628.81 64.96 0.00 0.00 7691.21 4617.31 15132.86 00:18:11.792 0 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:11.792 | select(.opcode=="crc32c") 00:18:11.792 | "\(.module_name) \(.executed)"' 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80291 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 80291 ']' 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 80291 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80291 00:18:11.792 killing process with pid 80291 00:18:11.792 Received shutdown signal, test time was about 2.000000 seconds 00:18:11.792 00:18:11.792 Latency(us) 00:18:11.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.792 =================================================================================================================== 00:18:11.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80291' 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 80291 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 80291 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80353 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80353 /var/tmp/bperf.sock 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 80353 ']' 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:11.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:11.792 08:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:11.792 [2024-06-10 08:13:33.624476] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:18:11.792 [2024-06-10 08:13:33.624812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80353 ] 00:18:11.792 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:11.792 Zero copy mechanism will not be used. 
00:18:12.051 [2024-06-10 08:13:33.761818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.051 [2024-06-10 08:13:33.867214] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.989 08:13:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:12.989 08:13:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:18:12.989 08:13:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:12.989 08:13:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:12.989 08:13:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:13.249 [2024-06-10 08:13:34.904737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:13.249 08:13:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.249 08:13:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.508 nvme0n1 00:18:13.508 08:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:13.508 08:13:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:13.508 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:13.508 Zero copy mechanism will not be used. 00:18:13.508 Running I/O for 2 seconds... 
00:18:16.041 00:18:16.041 Latency(us) 00:18:16.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.041 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:16.041 nvme0n1 : 2.00 4979.54 622.44 0.00 0.00 3206.40 2353.34 10843.23 00:18:16.041 =================================================================================================================== 00:18:16.041 Total : 4979.54 622.44 0.00 0.00 3206.40 2353.34 10843.23 00:18:16.041 0 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:16.041 | select(.opcode=="crc32c") 00:18:16.041 | "\(.module_name) \(.executed)"' 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80353 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 80353 ']' 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 80353 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80353 00:18:16.041 killing process with pid 80353 00:18:16.041 Received shutdown signal, test time was about 2.000000 seconds 00:18:16.041 00:18:16.041 Latency(us) 00:18:16.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.041 =================================================================================================================== 00:18:16.041 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80353' 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 80353 00:18:16.041 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 80353 00:18:16.300 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80138 00:18:16.300 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@949 -- # '[' -z 80138 ']' 00:18:16.300 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 80138 00:18:16.300 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:18:16.300 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:16.300 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80138 00:18:16.300 killing process with pid 80138 00:18:16.300 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:16.300 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:16.301 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80138' 00:18:16.301 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 80138 00:18:16.301 08:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 80138 00:18:16.560 00:18:16.560 real 0m18.779s 00:18:16.560 user 0m34.917s 00:18:16.560 sys 0m5.814s 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:16.560 ************************************ 00:18:16.560 END TEST nvmf_digest_clean 00:18:16.560 ************************************ 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 ************************************ 00:18:16.560 START TEST nvmf_digest_error 00:18:16.560 ************************************ 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80442 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80442 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 80442 ']' 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:16.560 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:16.560 08:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 [2024-06-10 08:13:38.373708] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:18:16.560 [2024-06-10 08:13:38.374172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.819 [2024-06-10 08:13:38.513754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.819 [2024-06-10 08:13:38.654357] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.819 [2024-06-10 08:13:38.654434] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.819 [2024-06-10 08:13:38.654456] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.819 [2024-06-10 08:13:38.654465] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.819 [2024-06-10 08:13:38.654472] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.819 [2024-06-10 08:13:38.654503] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.753 [2024-06-10 08:13:39.387129] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.753 [2024-06-10 08:13:39.468431] sock.c: 
25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:17.753 null0 00:18:17.753 [2024-06-10 08:13:39.526299] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.753 [2024-06-10 08:13:39.550481] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80474 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80474 /var/tmp/bperf.sock 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 80474 ']' 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:17.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:17.753 08:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.753 [2024-06-10 08:13:39.612506] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:18:17.753 [2024-06-10 08:13:39.612913] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80474 ] 00:18:18.012 [2024-06-10 08:13:39.755324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.272 [2024-06-10 08:13:39.893202] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.272 [2024-06-10 08:13:39.954500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:18.840 08:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:18.840 08:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:18:18.840 08:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:18.840 08:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:19.099 08:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:19.099 08:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.099 08:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:19.099 08:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.099 08:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:19.099 08:13:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:19.358 nvme0n1 00:18:19.358 08:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:19.358 08:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.358 08:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:19.359 08:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.359 08:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:19.359 08:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:19.618 Running I/O for 2 seconds... 
00:18:19.618 [2024-06-10 08:13:41.343749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.618 [2024-06-10 08:13:41.343879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.618 [2024-06-10 08:13:41.343897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.618 [2024-06-10 08:13:41.360920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.618 [2024-06-10 08:13:41.360959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.618 [2024-06-10 08:13:41.360991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.618 [2024-06-10 08:13:41.377793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.618 [2024-06-10 08:13:41.377867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.618 [2024-06-10 08:13:41.377900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.618 [2024-06-10 08:13:41.394173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.618 [2024-06-10 08:13:41.394212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.618 [2024-06-10 08:13:41.394242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.618 [2024-06-10 08:13:41.410499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.618 [2024-06-10 08:13:41.410535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.618 [2024-06-10 08:13:41.410565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.618 [2024-06-10 08:13:41.426899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.618 [2024-06-10 08:13:41.426933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.618 [2024-06-10 08:13:41.426962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.618 [2024-06-10 08:13:41.442951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.618 [2024-06-10 08:13:41.442990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.618 [2024-06-10 08:13:41.443020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.618 [2024-06-10 08:13:41.459130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.618 [2024-06-10 08:13:41.459165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.618 [2024-06-10 08:13:41.459195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.618 [2024-06-10 08:13:41.475971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.618 [2024-06-10 08:13:41.476008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.618 [2024-06-10 08:13:41.476022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.493575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.493612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.493642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.510829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.511046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.511081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.527536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.527583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.527614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.543730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.543768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.543811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.559906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.559946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.559975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.576356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.576394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.576423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.593265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.593301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.593331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.610165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.610203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.610216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.628005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.628044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.628059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.645545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.645584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.645613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.662688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.662725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.662755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.679694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.679731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.679762] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.696682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.696719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.696749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.713961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.713998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.714028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.878 [2024-06-10 08:13:41.730616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:19.878 [2024-06-10 08:13:41.730652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.878 [2024-06-10 08:13:41.730683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.747081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.747117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.747147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.764310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.764344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.764374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.781251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.781288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.781317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.797676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.797728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10341 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.797758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.814362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.814400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.814429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.831115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.831168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.831198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.847320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.847357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.847387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.863723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.863775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.863850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.880124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.880161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.880191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.896254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.896297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.896327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.913118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.913163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:10553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.913186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.930314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.930351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.930381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.946663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.946699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.138 [2024-06-10 08:13:41.946729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.138 [2024-06-10 08:13:41.963133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.138 [2024-06-10 08:13:41.963169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.139 [2024-06-10 08:13:41.963199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.139 [2024-06-10 08:13:41.980305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.139 [2024-06-10 08:13:41.980342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.139 [2024-06-10 08:13:41.980371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.139 [2024-06-10 08:13:41.997044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.139 [2024-06-10 08:13:41.997079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.139 [2024-06-10 08:13:41.997109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.398 [2024-06-10 08:13:42.014478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.398 [2024-06-10 08:13:42.014514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.398 [2024-06-10 08:13:42.014543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.398 [2024-06-10 08:13:42.030982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.398 [2024-06-10 08:13:42.031018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.398 [2024-06-10 08:13:42.031047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.398 [2024-06-10 08:13:42.047879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.398 [2024-06-10 08:13:42.047911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.398 [2024-06-10 08:13:42.047924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.398 [2024-06-10 08:13:42.064280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.398 [2024-06-10 08:13:42.064316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.398 [2024-06-10 08:13:42.064346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.080678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.399 [2024-06-10 08:13:42.080714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.080744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.097844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.399 [2024-06-10 08:13:42.097887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.097917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.114469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.399 [2024-06-10 08:13:42.114503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.114533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.131436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.399 [2024-06-10 08:13:42.131484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.131513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.148205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 
00:18:20.399 [2024-06-10 08:13:42.148241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.148271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.164477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.399 [2024-06-10 08:13:42.164512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.164542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.181057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.399 [2024-06-10 08:13:42.181094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.181124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.198237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.399 [2024-06-10 08:13:42.198273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.198286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.215003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.399 [2024-06-10 08:13:42.215039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.215069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.231859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.399 [2024-06-10 08:13:42.231894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.231923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.399 [2024-06-10 08:13:42.248422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.399 [2024-06-10 08:13:42.248458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.399 [2024-06-10 08:13:42.248487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.658 [2024-06-10 08:13:42.266146] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.658 [2024-06-10 08:13:42.266183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.658 [2024-06-10 08:13:42.266214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.658 [2024-06-10 08:13:42.282954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.282990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.283019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.300242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.300302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.300332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.317617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.317652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.317681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.334735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.334772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.334831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.352831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.352885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.352899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.370444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.370483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.370513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.386880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.386915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.386945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.410487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.410534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.410564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.427555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.427591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.427621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.444293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.444343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.444373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.461222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.461283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.461312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.477935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.477971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.478001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.494918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.494954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.494984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.659 [2024-06-10 08:13:42.511754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.659 [2024-06-10 08:13:42.511817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.659 [2024-06-10 08:13:42.511858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.918 [2024-06-10 08:13:42.529042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.918 [2024-06-10 08:13:42.529080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.918 [2024-06-10 08:13:42.529095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.918 [2024-06-10 08:13:42.545678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.918 [2024-06-10 08:13:42.545716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.918 [2024-06-10 08:13:42.545746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.918 [2024-06-10 08:13:42.562169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.918 [2024-06-10 08:13:42.562206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.918 [2024-06-10 08:13:42.562235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.918 [2024-06-10 08:13:42.578598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.918 [2024-06-10 08:13:42.578636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.918 [2024-06-10 08:13:42.578666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.918 [2024-06-10 08:13:42.594978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.918 [2024-06-10 08:13:42.595015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.918 [2024-06-10 08:13:42.595045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.918 [2024-06-10 08:13:42.611802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.918 [2024-06-10 08:13:42.611867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.918 [2024-06-10 
08:13:42.611898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.918 [2024-06-10 08:13:42.627829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.918 [2024-06-10 08:13:42.627874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.918 [2024-06-10 08:13:42.627904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.918 [2024-06-10 08:13:42.643898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.918 [2024-06-10 08:13:42.643933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.918 [2024-06-10 08:13:42.643962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.918 [2024-06-10 08:13:42.659799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.919 [2024-06-10 08:13:42.659844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.919 [2024-06-10 08:13:42.659874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.919 [2024-06-10 08:13:42.675648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.919 [2024-06-10 08:13:42.675685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.919 [2024-06-10 08:13:42.675715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.919 [2024-06-10 08:13:42.691808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.919 [2024-06-10 08:13:42.691853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.919 [2024-06-10 08:13:42.691882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.919 [2024-06-10 08:13:42.707738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.919 [2024-06-10 08:13:42.707774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.919 [2024-06-10 08:13:42.707837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.919 [2024-06-10 08:13:42.723981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.919 [2024-06-10 08:13:42.724016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5234 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.919 [2024-06-10 08:13:42.724046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.919 [2024-06-10 08:13:42.739936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.919 [2024-06-10 08:13:42.739972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.919 [2024-06-10 08:13:42.740002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.919 [2024-06-10 08:13:42.755883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.919 [2024-06-10 08:13:42.755918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.919 [2024-06-10 08:13:42.755947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.919 [2024-06-10 08:13:42.772014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:20.919 [2024-06-10 08:13:42.772051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.919 [2024-06-10 08:13:42.772081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.788191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.788250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.788280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.804561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.804601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.804615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.820459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.820495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.820525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.837076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.837118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:74 nsid:1 lba:7852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.837133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.853711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.853752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.853783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.870275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.870315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.870346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.886885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.886926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.886957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.903112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.903159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.903190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.919329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.919367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.919397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.935365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.935401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.935431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.951534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.951571] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.951601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.178 [2024-06-10 08:13:42.967572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.178 [2024-06-10 08:13:42.967610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.178 [2024-06-10 08:13:42.967624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.179 [2024-06-10 08:13:42.983550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.179 [2024-06-10 08:13:42.983592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.179 [2024-06-10 08:13:42.983623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.179 [2024-06-10 08:13:42.999600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.179 [2024-06-10 08:13:42.999639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.179 [2024-06-10 08:13:42.999669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.179 [2024-06-10 08:13:43.015743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.179 [2024-06-10 08:13:43.015811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.179 [2024-06-10 08:13:43.015844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.179 [2024-06-10 08:13:43.031855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.179 [2024-06-10 08:13:43.031893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.179 [2024-06-10 08:13:43.031923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.438 [2024-06-10 08:13:43.048132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.438 [2024-06-10 08:13:43.048199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.438 [2024-06-10 08:13:43.048229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.438 [2024-06-10 08:13:43.064521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x11c5ef0) 00:18:21.438 [2024-06-10 08:13:43.064557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.438 [2024-06-10 08:13:43.064587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.438 [2024-06-10 08:13:43.080449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.438 [2024-06-10 08:13:43.080484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.438 [2024-06-10 08:13:43.080514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.438 [2024-06-10 08:13:43.096487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.438 [2024-06-10 08:13:43.096522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.096550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.112298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.112332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.112361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.128228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.128264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.128293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.143979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.144013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.144042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.159878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.159912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.159941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.175654] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.175689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.175718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.191697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.191732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.191761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.207624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.207659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.207689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.223606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.223641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.223670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.239718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.239753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.239781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.255796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.255830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.255858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.439 [2024-06-10 08:13:43.271999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0) 00:18:21.439 [2024-06-10 08:13:43.272036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.439 [2024-06-10 08:13:43.272066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:18:21.439 [2024-06-10 08:13:43.288120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0)
00:18:21.439 [2024-06-10 08:13:43.288155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:21.439 [2024-06-10 08:13:43.288184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:21.439 [2024-06-10 08:13:43.304415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0)
00:18:21.439 [2024-06-10 08:13:43.304452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:21.439 [2024-06-10 08:13:43.304482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:21.698 [2024-06-10 08:13:43.320761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c5ef0)
00:18:21.698 [2024-06-10 08:13:43.320820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:21.698 [2024-06-10 08:13:43.320834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:21.698
00:18:21.698 Latency(us)
00:18:21.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:21.698 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:18:21.698 nvme0n1 : 2.01 15289.96 59.73 0.00 0.00 8365.57 7596.22 32648.84
00:18:21.698 ===================================================================================================================
00:18:21.698 Total : 15289.96 59.73 0.00 0.00 8365.57 7596.22 32648.84
00:18:21.698 0
00:18:21.698 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:21.698 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:21.698 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:21.698 | .driver_specific
00:18:21.698 | .nvme_error
00:18:21.698 | .status_code
00:18:21.698 | .command_transient_transport_error'
00:18:21.698 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 ))
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80474
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 80474 ']'
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 80474
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80474
00:18:21.957 killing process with pid 80474
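The host/digest.sh trace above is the pass/fail check for the run whose output precedes it: get_transient_errcount pipes bdev_get_iostat for nvme0n1 through the multi-line jq filter shown, pulling out the count of COMMAND TRANSIENT TRANSPORT ERROR completions that the bdev_nvme layer accumulated (the counters are exposed because the bperf instance is configured with --nvme-error-stat, as the setup trace further down shows), and the (( 120 > 0 )) test confirms that 120 such errors were recorded before the process is killed. A minimal standalone sketch of the same check, using the rpc.py path and bperf RPC socket from this log and collapsing the jq filter into a single expression:

    # Sketch of the transient-error check traced above (paths and socket taken from this log).
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( count > 0 ))   # this run substituted 120 here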
00:18:21.957 Received shutdown signal, test time was about 2.000000 seconds
00:18:21.957
00:18:21.957 Latency(us)
00:18:21.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:21.957 ===================================================================================================================
00:18:21.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80474'
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 80474
00:18:21.957 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 80474
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80529
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80529 /var/tmp/bperf.sock
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 80529 ']'
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:22.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:18:22.218 08:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:22.218 [2024-06-10 08:13:43.910403] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization...
00:18:22.218 [2024-06-10 08:13:43.910759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80529 ]
00:18:22.218 I/O size of 131072 is greater than zero copy threshold (65536).
00:18:22.218 Zero copy mechanism will not be used.
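The run_bperf_err trace above launches a second bdevperf instance for the 131072-byte, queue-depth-16 error pass. A sketch of that launch with the binary path, socket and flags exactly as traced; the flag meanings are the standard bdevperf options: -m 2 pins it to core 1 (see the "Reactor started on core 1" notice just below), -r gives it a private RPC socket, -w/-o/-q/-t request a 2-second randread run with 131072-byte I/Os at queue depth 16, and -z keeps it idle until perform_tests is sent over that socket:

    # Sketch of the bdevperf launch traced above (digest.sh backgrounds it and records the pid).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!   # the trace records bperfpid=80529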
00:18:22.218 [2024-06-10 08:13:44.051082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:22.477 [2024-06-10 08:13:44.211431] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:18:22.477 [2024-06-10 08:13:44.270699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:18:23.045 08:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:18:23.045 08:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:18:23.045 08:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:23.045 08:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:23.305 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:23.305 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:23.305 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:23.305 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:23.305 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:23.305 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:23.874 nvme0n1
00:18:23.874 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:18:23.874 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:23.874 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:23.874 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:23.874 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:18:23.874 08:13:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:18:23.874 I/O size of 131072 is greater than zero copy threshold (65536).
00:18:23.874 Zero copy mechanism will not be used.
00:18:23.874 Running I/O for 2 seconds...
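The setup traced above is what produces the stream of digest errors that follows: bdev_nvme on the bperf side is configured to keep NVMe error statistics and to retry failed I/O rather than immediately failing it (--nvme-error-stat --bdev-retry-count -1), crc32c error injection is first disabled so the controller can attach cleanly, the controller is attached over TCP with data digest enabled (--ddgst), crc32c corruption is then injected (accel_error_inject_error -o crc32c -t corrupt -i 32), and perform_tests starts the 2-second randread run. With crc32c results corrupted, received READ data no longer verifies against its data digest, and each failure surfaces as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions seen below. A sketch of the same sequence as plain rpc.py calls; in the trace the accel injection goes through rpc_cmd rather than the bperf socket, so the socket it targets is an assumption here:

    # Sketch of the digest-error setup traced above; the socket for the accel calls is assumed.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable        # no corruption while connecting
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt crc32c so data digests fail
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests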
00:18:23.874 [2024-06-10 08:13:45.554965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.555019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.555067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.560421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.560461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.560491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.565554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.565591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.565620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.570546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.570598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.570642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.575747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.575810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.575842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.580941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.580979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.581010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.585972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.586008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.586037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.591113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.591151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.591180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.596324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.596360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.596390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.601535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.601583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.601612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.606790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.606836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.606866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.611806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.611841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.874 [2024-06-10 08:13:45.611869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.874 [2024-06-10 08:13:45.616678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.874 [2024-06-10 08:13:45.616715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.616743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.621751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.621831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.621846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.626844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.626879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.626907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.631823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.631858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.631886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.636674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.636710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.636739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.641751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.641815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.641846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.646862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.646897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.646925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.651904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.651939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.651968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.656889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.656926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.875 [2024-06-10 08:13:45.656956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.662187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.662222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.662252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.667199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.667234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.667263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.672406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.672441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.672469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.677524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.677559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.677588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.682462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.682497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.682526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.687526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.687577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.687605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.692574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.692609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.692639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.697737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.697771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.697828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.703022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.703057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.703085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.708052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.708088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.708117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.713170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.713220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.713249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.718180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.718215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.718244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.723336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.723372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.723401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.728498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.728532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.728561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.733686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.733730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.733758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.875 [2024-06-10 08:13:45.739059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:23.875 [2024-06-10 08:13:45.739096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.875 [2024-06-10 08:13:45.739125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.136 [2024-06-10 08:13:45.744268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.136 [2024-06-10 08:13:45.744304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.136 [2024-06-10 08:13:45.744333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.136 [2024-06-10 08:13:45.749437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.136 [2024-06-10 08:13:45.749472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.136 [2024-06-10 08:13:45.749501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.136 [2024-06-10 08:13:45.754381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.136 [2024-06-10 08:13:45.754416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.136 [2024-06-10 08:13:45.754444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.136 [2024-06-10 08:13:45.759379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.136 [2024-06-10 08:13:45.759415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.136 [2024-06-10 08:13:45.759443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.136 [2024-06-10 08:13:45.764397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 
00:18:24.136 [2024-06-10 08:13:45.764432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.136 [2024-06-10 08:13:45.764474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.136 [2024-06-10 08:13:45.769542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.136 [2024-06-10 08:13:45.769576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.136 [2024-06-10 08:13:45.769604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.136 [2024-06-10 08:13:45.774571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.136 [2024-06-10 08:13:45.774612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.136 [2024-06-10 08:13:45.774640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.779683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.779719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.779747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.784626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.784661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.784689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.789870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.789905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.789933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.795048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.795083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.795112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.800267] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.800303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.800332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.805395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.805430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.805459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.810396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.810431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.810459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.815520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.815555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.815583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.820448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.820483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.820512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.825469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.825504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.825532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.830405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.830440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.830469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.835366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.835400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.835428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.840366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.840401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.840429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.845521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.845556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.845585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.850685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.850737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.850767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.856012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.856047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.856076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.861023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.861060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.861089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.866180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.866214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.866243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.871165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.871200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.871229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.876189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.876225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.876238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.881295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.881329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.881357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.886287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.886335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.886363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.891394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.891428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.891456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.896748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.896806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.896837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.902122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.902157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.902186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.907350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.907384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.907413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.137 [2024-06-10 08:13:45.912591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.137 [2024-06-10 08:13:45.912626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.137 [2024-06-10 08:13:45.912654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.917923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.917957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.917985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.923015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.923049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.923077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.928002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.928053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.928082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.932945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.932988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.933002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.937802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.937892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:24.138 [2024-06-10 08:13:45.937919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.942981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.943016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.943044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.948070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.948105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.948133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.953416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.953450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.953479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.958499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.958534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.958563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.963808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.963845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.963875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.969027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.969062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.969091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.974435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.974493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.974536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.979688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.979723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.979752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.984815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.984901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.984916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.990083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.990117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.990154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:45.995438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:45.995473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:45.995502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.138 [2024-06-10 08:13:46.000987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.138 [2024-06-10 08:13:46.001025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.138 [2024-06-10 08:13:46.001039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.006579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.399 [2024-06-10 08:13:46.006615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.399 [2024-06-10 08:13:46.006643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.011686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.399 [2024-06-10 08:13:46.011723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.399 [2024-06-10 08:13:46.011751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.016695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.399 [2024-06-10 08:13:46.016732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.399 [2024-06-10 08:13:46.016761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.021838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.399 [2024-06-10 08:13:46.021883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.399 [2024-06-10 08:13:46.021912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.027008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.399 [2024-06-10 08:13:46.027044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.399 [2024-06-10 08:13:46.027073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.032245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.399 [2024-06-10 08:13:46.032280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.399 [2024-06-10 08:13:46.032321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.037509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.399 [2024-06-10 08:13:46.037544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.399 [2024-06-10 08:13:46.037574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.042885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.399 [2024-06-10 08:13:46.042921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.399 [2024-06-10 08:13:46.042949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.048073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 
00:18:24.399 [2024-06-10 08:13:46.048108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.399 [2024-06-10 08:13:46.048138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.053337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.399 [2024-06-10 08:13:46.053372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.399 [2024-06-10 08:13:46.053402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.399 [2024-06-10 08:13:46.058576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.399 [2024-06-10 08:13:46.058612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.058641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.063743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.063808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.063840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.068958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.068994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.069023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.074091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.074126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.074155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.079381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.079416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.079445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.084589] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.084625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.084655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.089912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.089963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.090007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.095223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.095260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.095290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.100219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.100254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.100283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.105197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.105249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.105279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.110160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.110195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.110225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.115093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.115127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.115156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.120032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.120066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.120095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.124997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.125034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.125064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.129993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.130027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.130056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.134966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.135001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.135030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.139936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.139971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.140000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.144932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.144968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.144997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.149979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.150013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.150042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.154964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.154998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.155026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.159905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.159939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.159968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.164944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.164980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.165009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.170100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.170134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.170164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.175322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.175358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.175386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.180625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.180661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.180691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.186012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.186047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.186076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.191409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.191444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.400 [2024-06-10 08:13:46.191482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.400 [2024-06-10 08:13:46.196767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.400 [2024-06-10 08:13:46.196846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.196884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.202110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.202145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.202175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.207296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.207331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.207361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.212352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.212386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.212416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.217443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.217479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.217508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.222425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.222460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:24.401 [2024-06-10 08:13:46.222496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.227478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.227522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.227551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.232599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.232651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.232679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.237630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.237665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.237694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.242504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.242540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.242569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.247401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.247438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.247479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.252386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.252422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.252452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.257497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.257532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.257561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.401 [2024-06-10 08:13:46.262459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.401 [2024-06-10 08:13:46.262494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.401 [2024-06-10 08:13:46.262523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.267599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.267634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.267664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.272823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.272907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.272920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.277881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.277943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.277972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.282807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.282889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.282904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.288088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.288127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.288157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.293161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.293200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.293213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.298095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.298132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.298146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.303016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.303052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.303066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.307899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.307937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.307951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.312907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.312944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.312958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.317755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.317837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.317853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.322819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.322854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.322883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.327849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 
00:18:24.672 [2024-06-10 08:13:46.327884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.327914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.332830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.332900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.332930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.337953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.337988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.338017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.342915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.342950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.342978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.347961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.348005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.348035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.353105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.353142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.353177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.358265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.358300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.358329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.363136] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.363170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.363199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.368073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.368108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.368137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.373087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.373124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.373155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.378213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.378248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.378277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.383379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.383413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.383443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.388367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.388403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.388432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.672 [2024-06-10 08:13:46.393312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.672 [2024-06-10 08:13:46.393348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.672 [2024-06-10 08:13:46.393377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.398287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.398323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.398353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.403334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.403369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.403410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.408474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.408516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.408545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.413607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.413643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.413672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.418972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.419010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.419024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.424152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.424189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.424203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.429168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.429206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.429221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.434356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.434392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.434422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.439624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.439660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.439688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.444837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.444893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.444908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.450006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.450043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.450057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.455107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.455144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.455158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.460300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.460335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.460364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.465222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.465257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.465270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.470444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.470494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.470527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.476033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.476071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.476085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.481713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.481751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.481766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.487032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.487070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.487084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.492357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.492393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.492422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.497923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.497960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.497974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.503079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.503116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:24.673 [2024-06-10 08:13:46.503146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.508139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.508174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.508204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.513333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.513394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.513423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.518300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.518337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.518367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.523343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.523381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.523395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.673 [2024-06-10 08:13:46.528262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.673 [2024-06-10 08:13:46.528299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.673 [2024-06-10 08:13:46.528313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.533213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.533258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.533273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.538177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.538214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.538228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.543257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.543294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.543309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.548180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.548217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.548231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.553006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.553044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.553059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.557917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.557952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.557982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.562952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.562990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.563004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.568007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.568043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.568057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.573006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.573044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.573059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.577862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.577897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.577928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.582731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.582767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.582812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.587788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.587848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.587862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.592668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.592704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.592733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.597540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.597577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.597607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.602459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.602665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.602686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.607731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 
00:18:24.944 [2024-06-10 08:13:46.607933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.608076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.613115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.613290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.613507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.618612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.618813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.618932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.624019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.624220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.624368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.629600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.629774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.630051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.635394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.635435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.635476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.640700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.640743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.640773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.645809] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.645878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.645893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.650883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.650919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.650949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.655933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.944 [2024-06-10 08:13:46.655968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.944 [2024-06-10 08:13:46.655997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.944 [2024-06-10 08:13:46.660880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.660935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.660949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.665970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.666005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.666035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.671024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.671060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.671089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.676089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.676125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.676155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.681088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.681136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.681150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.686146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.686181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.686210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.691184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.691234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.691263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.696117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.696161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.696190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.701180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.701298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.701343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.706329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.706364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.706393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.711773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.711854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.711869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.716813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.716848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.716901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.721917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.721952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.721981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.726905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.726940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.726969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.731801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.731847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.731876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.736684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.736719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.736748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.741660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.741696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.741725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.746540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.746589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.746618] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.751528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.751563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.751592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.756437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.756482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.756511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.761542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.761577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.761606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.766725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.766761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.766791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.771879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.771925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.771954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.777172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.777223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.777253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.782493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.782529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.782558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.787697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.787733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.787762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.792951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.945 [2024-06-10 08:13:46.792988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.945 [2024-06-10 08:13:46.793024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.945 [2024-06-10 08:13:46.798514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.946 [2024-06-10 08:13:46.798549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.946 [2024-06-10 08:13:46.798577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.946 [2024-06-10 08:13:46.804033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:24.946 [2024-06-10 08:13:46.804070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.946 [2024-06-10 08:13:46.804099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.206 [2024-06-10 08:13:46.809301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.206 [2024-06-10 08:13:46.809338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.809353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.814555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.814608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.814623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.819775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.819868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.819892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.825002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.825039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.825053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.830295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.830331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.830362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.835602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.835639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.835669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.840983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.841021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.841035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.846398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.846434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.846476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.851654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.851689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.851718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.857294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.857329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.857359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.862602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.862638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.862667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.867991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.868026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.868039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.873104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.873142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.873156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.878400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.878435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.878477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.883730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.883765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.883809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.888959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.888997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.889011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.894154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 
00:18:25.207 [2024-06-10 08:13:46.894188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.894217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.899351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.899386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.899415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.904619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.904654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.904684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.910022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.910059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.910089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.915309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.915344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.915373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.920945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.920982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.920997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.926295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.926330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.926359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.931463] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.931507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.931536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.936687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.936723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.936752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.942025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.942060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.942089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.947326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.207 [2024-06-10 08:13:46.947360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.207 [2024-06-10 08:13:46.947391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.207 [2024-06-10 08:13:46.952598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:46.952633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:46.952663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:46.958083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:46.958118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:46.958148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:46.963232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:46.963267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:46.963297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:46.968501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:46.968537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:46.968566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:46.973835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:46.973911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:46.973942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:46.979116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:46.979162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:46.979192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:46.984569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:46.984636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:46.984665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:46.989803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:46.989895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:46.989910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:46.995364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:46.995399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:46.995444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.000967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.001004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.001018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.006262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.006301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.006315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.011384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.011421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.011436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.016693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.016729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.016758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.022065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.022101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.022130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.027389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.027424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.027453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.032655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.032706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.032735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.038134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.038170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.038198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.043385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.043422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.043451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.048664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.048699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.048727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.053928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.053962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.053991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.059101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.059136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.059165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.064365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.064400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.064428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.208 [2024-06-10 08:13:47.069785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.208 [2024-06-10 08:13:47.069849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.208 [2024-06-10 08:13:47.069880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.075205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.075240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:25.469 [2024-06-10 08:13:47.075268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.080697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.080732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.080761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.086073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.086108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.086137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.091214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.091249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.091278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.096677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.096713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.096741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.102025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.102060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.102089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.107363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.107407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.107436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.112644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.112680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.112709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.117958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.117994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.118034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.123476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.123512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.123542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.128649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.128684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.128714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.133925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.133960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.133989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.139085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.469 [2024-06-10 08:13:47.139122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.469 [2024-06-10 08:13:47.139152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.469 [2024-06-10 08:13:47.144309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.144344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.144374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.149616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.149651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.149680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.154839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.154884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.154913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.160092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.160127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.160156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.165287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.165322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.165351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.170426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.170461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.170500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.175615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.175651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.175680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.180863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.180930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.180960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.186214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 
00:18:25.470 [2024-06-10 08:13:47.186249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.186278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.191341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.191376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.191405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.196341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.196376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.196406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.201463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.201505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.201534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.206413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.206449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.206478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.211484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.211520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.211549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.216475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.216511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.216540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.221732] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.221770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.221815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.226731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.226765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.226834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.231772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.231833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.231859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.236604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.236640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.236669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.241670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.241871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.241888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.246890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.246927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.246957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.251874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.251909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.251939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.256687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.256723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.256752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.261720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.261757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.261793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.266946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.266982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.267011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.272149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.272196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.272235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.277418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.277454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.470 [2024-06-10 08:13:47.277483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.470 [2024-06-10 08:13:47.282634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.470 [2024-06-10 08:13:47.282670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.282699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.471 [2024-06-10 08:13:47.287864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.471 [2024-06-10 08:13:47.287907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.287936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.471 [2024-06-10 08:13:47.293138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.471 [2024-06-10 08:13:47.293186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.293200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.471 [2024-06-10 08:13:47.298442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.471 [2024-06-10 08:13:47.298486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.298524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.471 [2024-06-10 08:13:47.303611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.471 [2024-06-10 08:13:47.303647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.303676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.471 [2024-06-10 08:13:47.308907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.471 [2024-06-10 08:13:47.308943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.308973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.471 [2024-06-10 08:13:47.313944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.471 [2024-06-10 08:13:47.313978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.314008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.471 [2024-06-10 08:13:47.318805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.471 [2024-06-10 08:13:47.318840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.318869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.471 [2024-06-10 08:13:47.323734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.471 [2024-06-10 08:13:47.323771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.323833] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.471 [2024-06-10 08:13:47.328879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.471 [2024-06-10 08:13:47.328931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.328960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.471 [2024-06-10 08:13:47.334013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.471 [2024-06-10 08:13:47.334049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.471 [2024-06-10 08:13:47.334078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.731 [2024-06-10 08:13:47.339045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.731 [2024-06-10 08:13:47.339081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.731 [2024-06-10 08:13:47.339110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.731 [2024-06-10 08:13:47.344006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.731 [2024-06-10 08:13:47.344041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.731 [2024-06-10 08:13:47.344070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.731 [2024-06-10 08:13:47.348964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.731 [2024-06-10 08:13:47.349002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.731 [2024-06-10 08:13:47.349016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.731 [2024-06-10 08:13:47.353941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.731 [2024-06-10 08:13:47.353975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.731 [2024-06-10 08:13:47.354004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.731 [2024-06-10 08:13:47.358890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.731 [2024-06-10 08:13:47.358926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:25.731 [2024-06-10 08:13:47.358955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.731 [2024-06-10 08:13:47.363939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.731 [2024-06-10 08:13:47.363977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.731 [2024-06-10 08:13:47.364007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.731 [2024-06-10 08:13:47.368907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.731 [2024-06-10 08:13:47.368945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.731 [2024-06-10 08:13:47.368959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.374046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.374081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.374110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.378894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.378929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.378959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.383914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.383947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.383976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.388736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.388771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.388832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.393687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.393723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.393752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.398585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.398620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.398650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.403589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.403623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.403652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.408609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.408646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.408677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.413747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.413811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.413842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.418569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.418605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.418635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.423559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.423594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.423623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.428547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.428583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.428613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.433746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.433810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.433840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.438779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.438842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.438857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.443922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.443959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.443973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.449222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.449271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.449300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.454334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.454370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.454399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.459617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.459677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.459706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.464722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 
00:18:25.732 [2024-06-10 08:13:47.464758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.464788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.469853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.469901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.469915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.475049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.475087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.475102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.480182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.480233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.480263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.485300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.485335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.485364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.490555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.490605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.732 [2024-06-10 08:13:47.490634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.732 [2024-06-10 08:13:47.495689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.732 [2024-06-10 08:13:47.495740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.733 [2024-06-10 08:13:47.495768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.733 [2024-06-10 08:13:47.500950] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.733 [2024-06-10 08:13:47.500988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.733 [2024-06-10 08:13:47.501002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.733 [2024-06-10 08:13:47.506087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.733 [2024-06-10 08:13:47.506121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.733 [2024-06-10 08:13:47.506150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.733 [2024-06-10 08:13:47.511055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.733 [2024-06-10 08:13:47.511090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.733 [2024-06-10 08:13:47.511120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.733 [2024-06-10 08:13:47.516114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.733 [2024-06-10 08:13:47.516150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.733 [2024-06-10 08:13:47.516179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.733 [2024-06-10 08:13:47.521094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.733 [2024-06-10 08:13:47.521132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.733 [2024-06-10 08:13:47.521146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.733 [2024-06-10 08:13:47.526055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.733 [2024-06-10 08:13:47.526089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.733 [2024-06-10 08:13:47.526119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.733 [2024-06-10 08:13:47.531177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0) 00:18:25.733 [2024-06-10 08:13:47.531212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.733 [2024-06-10 08:13:47.531242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0
00:18:25.733 [2024-06-10 08:13:47.536164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0)
00:18:25.733 [2024-06-10 08:13:47.536216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:25.733 [2024-06-10 08:13:47.536246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:18:25.733 [2024-06-10 08:13:47.541359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0)
00:18:25.733 [2024-06-10 08:13:47.541395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:25.733 [2024-06-10 08:13:47.541424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:25.733 [2024-06-10 08:13:47.546425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x167c8c0)
00:18:25.733 [2024-06-10 08:13:47.546461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:25.733 [2024-06-10 08:13:47.546491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:25.733
00:18:25.733 Latency(us)
00:18:25.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:25.733 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:18:25.733 nvme0n1 : 2.00 6016.90 752.11 0.00 0.00 2655.86 2278.87 5779.08
00:18:25.733 ===================================================================================================================
00:18:25.733 Total : 6016.90 752.11 0.00 0.00 2655.86 2278.87 5779.08
00:18:25.733 0
00:18:25.733 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:25.733 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:25.733 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:25.733 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:25.733 | .driver_specific
00:18:25.733 | .nvme_error
00:18:25.733 | .status_code
00:18:25.733 | .command_transient_transport_error'
00:18:25.992 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 388 > 0 ))
00:18:25.992 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80529
00:18:25.992 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 80529 ']'
00:18:25.992 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 80529
00:18:25.992 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:18:25.992 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:18:25.992 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80529
00:18:26.251 killing process with pid 80529
00:18:26.251 Received shutdown signal, test time was about 2.000000 seconds
00:18:26.251
00:18:26.251 Latency(us)
00:18:26.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:26.251 ===================================================================================================================
00:18:26.251 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:26.251 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:18:26.251 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:18:26.251 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80529'
00:18:26.251 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 80529
00:18:26.251 08:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 80529
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80589
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80589 /var/tmp/bperf.sock
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 80589 ']'
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:26.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:18:26.251 08:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:26.511 [2024-06-10 08:13:48.148292] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization...
00:18:26.511 [2024-06-10 08:13:48.148675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80589 ] 00:18:26.511 [2024-06-10 08:13:48.282260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.770 [2024-06-10 08:13:48.401851] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.770 [2024-06-10 08:13:48.458118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:27.338 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:27.338 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:18:27.338 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:27.338 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:27.597 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:27.597 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.597 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:27.597 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.597 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:27.597 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:27.857 nvme0n1 00:18:27.857 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:27.857 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.857 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:28.116 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.116 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:28.116 08:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:28.116 Running I/O for 2 seconds... 
00:18:28.116 [2024-06-10 08:13:49.852201] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fef90 00:18:28.116 [2024-06-10 08:13:49.854903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.116 [2024-06-10 08:13:49.854960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.116 [2024-06-10 08:13:49.868653] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190feb58 00:18:28.116 [2024-06-10 08:13:49.871250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.116 [2024-06-10 08:13:49.871286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:28.116 [2024-06-10 08:13:49.885431] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fe2e8 00:18:28.116 [2024-06-10 08:13:49.888034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.116 [2024-06-10 08:13:49.888070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:28.116 [2024-06-10 08:13:49.902090] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fda78 00:18:28.116 [2024-06-10 08:13:49.904643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.116 [2024-06-10 08:13:49.904677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:28.116 [2024-06-10 08:13:49.918503] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fd208 00:18:28.116 [2024-06-10 08:13:49.921073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.116 [2024-06-10 08:13:49.921109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:28.116 [2024-06-10 08:13:49.934865] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fc998 00:18:28.116 [2024-06-10 08:13:49.937361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.116 [2024-06-10 08:13:49.937395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:28.116 [2024-06-10 08:13:49.950963] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fc128 00:18:28.116 [2024-06-10 08:13:49.953457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.116 [2024-06-10 08:13:49.953515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:18:28.116 [2024-06-10 08:13:49.967301] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fb8b8 00:18:28.116 [2024-06-10 08:13:49.969816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.116 [2024-06-10 08:13:49.969873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:28.376 [2024-06-10 08:13:49.984124] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fb048 00:18:28.376 [2024-06-10 08:13:49.986551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.376 [2024-06-10 08:13:49.986585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:28.376 [2024-06-10 08:13:50.000666] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fa7d8 00:18:28.376 [2024-06-10 08:13:50.003175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.376 [2024-06-10 08:13:50.003209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:28.376 [2024-06-10 08:13:50.017263] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f9f68 00:18:28.376 [2024-06-10 08:13:50.019708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.376 [2024-06-10 08:13:50.019741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:28.376 [2024-06-10 08:13:50.033731] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f96f8 00:18:28.376 [2024-06-10 08:13:50.036004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.376 [2024-06-10 08:13:50.036037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:28.376 [2024-06-10 08:13:50.050106] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f8e88 00:18:28.376 [2024-06-10 08:13:50.052525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.376 [2024-06-10 08:13:50.052558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:28.376 [2024-06-10 08:13:50.066444] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f8618 00:18:28.376 [2024-06-10 08:13:50.068810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.068843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:18:28.377 [2024-06-10 08:13:50.082670] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f7da8 00:18:28.377 [2024-06-10 08:13:50.085068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.085103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:28.377 [2024-06-10 08:13:50.099420] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f7538 00:18:28.377 [2024-06-10 08:13:50.101749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.101809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:28.377 [2024-06-10 08:13:50.116150] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f6cc8 00:18:28.377 [2024-06-10 08:13:50.118430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.118463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.377 [2024-06-10 08:13:50.132749] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f6458 00:18:28.377 [2024-06-10 08:13:50.135078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.135112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:28.377 [2024-06-10 08:13:50.149240] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f5be8 00:18:28.377 [2024-06-10 08:13:50.151390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.151423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:28.377 [2024-06-10 08:13:50.165417] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f5378 00:18:28.377 [2024-06-10 08:13:50.167694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.167729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:28.377 [2024-06-10 08:13:50.182189] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f4b08 00:18:28.377 [2024-06-10 08:13:50.184363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.184396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:28.377 [2024-06-10 08:13:50.198525] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f4298 00:18:28.377 [2024-06-10 08:13:50.200654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.200687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:28.377 [2024-06-10 08:13:50.214969] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f3a28 00:18:28.377 [2024-06-10 08:13:50.217133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.217169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:28.377 [2024-06-10 08:13:50.231486] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f31b8 00:18:28.377 [2024-06-10 08:13:50.233748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.377 [2024-06-10 08:13:50.233802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:28.636 [2024-06-10 08:13:50.248592] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f2948 00:18:28.636 [2024-06-10 08:13:50.250758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.636 [2024-06-10 08:13:50.250818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:28.636 [2024-06-10 08:13:50.264962] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f20d8 00:18:28.636 [2024-06-10 08:13:50.266992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.636 [2024-06-10 08:13:50.267027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:28.636 [2024-06-10 08:13:50.280640] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f1868 00:18:28.636 [2024-06-10 08:13:50.282595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.636 [2024-06-10 08:13:50.282628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:28.636 [2024-06-10 08:13:50.297059] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f0ff8 00:18:28.636 [2024-06-10 08:13:50.299139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.636 [2024-06-10 08:13:50.299173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:28.636 [2024-06-10 08:13:50.313411] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f0788 00:18:28.636 [2024-06-10 08:13:50.315524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.636 [2024-06-10 08:13:50.315556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:28.636 [2024-06-10 08:13:50.330045] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190eff18 00:18:28.636 [2024-06-10 08:13:50.332344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.636 [2024-06-10 08:13:50.332380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:28.636 [2024-06-10 08:13:50.346559] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ef6a8 00:18:28.636 [2024-06-10 08:13:50.348569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.636 [2024-06-10 08:13:50.348603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:28.636 [2024-06-10 08:13:50.362893] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190eee38 00:18:28.636 [2024-06-10 08:13:50.364851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.636 [2024-06-10 08:13:50.364925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:28.636 [2024-06-10 08:13:50.378680] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ee5c8 00:18:28.637 [2024-06-10 08:13:50.380653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.637 [2024-06-10 08:13:50.380686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.637 [2024-06-10 08:13:50.394912] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190edd58 00:18:28.637 [2024-06-10 08:13:50.396730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.637 [2024-06-10 08:13:50.396763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:28.637 [2024-06-10 08:13:50.410890] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ed4e8 00:18:28.637 [2024-06-10 08:13:50.412804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.637 [2024-06-10 08:13:50.412844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:28.637 [2024-06-10 08:13:50.427073] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ecc78 00:18:28.637 [2024-06-10 08:13:50.429010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.637 [2024-06-10 08:13:50.429046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:28.637 [2024-06-10 08:13:50.443397] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ec408 00:18:28.637 [2024-06-10 08:13:50.445341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.637 [2024-06-10 08:13:50.445373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:28.637 [2024-06-10 08:13:50.460168] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ebb98 00:18:28.637 [2024-06-10 08:13:50.462055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.637 [2024-06-10 08:13:50.462088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:28.637 [2024-06-10 08:13:50.476612] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190eb328 00:18:28.637 [2024-06-10 08:13:50.478429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.637 [2024-06-10 08:13:50.478462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:28.637 [2024-06-10 08:13:50.492907] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190eaab8 00:18:28.637 [2024-06-10 08:13:50.494707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.637 [2024-06-10 08:13:50.494740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:28.896 [2024-06-10 08:13:50.509739] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ea248 00:18:28.896 [2024-06-10 08:13:50.511527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.896 [2024-06-10 08:13:50.511578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:28.896 [2024-06-10 08:13:50.526448] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e99d8 00:18:28.896 [2024-06-10 08:13:50.528233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.896 [2024-06-10 08:13:50.528265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:28.896 [2024-06-10 08:13:50.542993] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e9168 00:18:28.896 [2024-06-10 08:13:50.544699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.896 [2024-06-10 08:13:50.544732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:28.896 [2024-06-10 08:13:50.559297] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e88f8 00:18:28.896 [2024-06-10 08:13:50.561065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.896 [2024-06-10 08:13:50.561100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:28.896 [2024-06-10 08:13:50.575555] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e8088 00:18:28.897 [2024-06-10 08:13:50.577294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.577341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.591270] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e7818 00:18:28.897 [2024-06-10 08:13:50.592927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.592963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.607001] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e6fa8 00:18:28.897 [2024-06-10 08:13:50.608616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.608650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.622870] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e6738 00:18:28.897 [2024-06-10 08:13:50.624365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.624398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.638250] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e5ec8 00:18:28.897 [2024-06-10 08:13:50.639794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.639851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.653958] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e5658 00:18:28.897 [2024-06-10 08:13:50.655478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.655511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.669705] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e4de8 00:18:28.897 [2024-06-10 08:13:50.671210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.671244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.686059] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e4578 00:18:28.897 [2024-06-10 08:13:50.687624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.687657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.702317] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e3d08 00:18:28.897 [2024-06-10 08:13:50.703895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.703925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.718731] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e3498 00:18:28.897 [2024-06-10 08:13:50.720291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.720324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.735254] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e2c28 00:18:28.897 [2024-06-10 08:13:50.736733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 08:13:50.736766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:28.897 [2024-06-10 08:13:50.751513] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e23b8 00:18:28.897 [2024-06-10 08:13:50.753008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.897 [2024-06-10 
08:13:50.753043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.768223] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e1b48 00:18:29.157 [2024-06-10 08:13:50.769715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.769748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.784424] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e12d8 00:18:29.157 [2024-06-10 08:13:50.785976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.786010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.800901] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e0a68 00:18:29.157 [2024-06-10 08:13:50.802368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.802416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.817389] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e01f8 00:18:29.157 [2024-06-10 08:13:50.818802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.818869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.833660] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190df988 00:18:29.157 [2024-06-10 08:13:50.835094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.835127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.850177] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190df118 00:18:29.157 [2024-06-10 08:13:50.851534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.851567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.866066] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190de8a8 00:18:29.157 [2024-06-10 08:13:50.867297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:29.157 [2024-06-10 08:13:50.867332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.882342] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190de038 00:18:29.157 [2024-06-10 08:13:50.883713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.883746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.905546] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190de038 00:18:29.157 [2024-06-10 08:13:50.908152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.908185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.921802] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190de8a8 00:18:29.157 [2024-06-10 08:13:50.924365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.924399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.938380] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190df118 00:18:29.157 [2024-06-10 08:13:50.940964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.941000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.954722] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190df988 00:18:29.157 [2024-06-10 08:13:50.957295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.957342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.971069] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e01f8 00:18:29.157 [2024-06-10 08:13:50.973590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.973623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:50.987238] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e0a68 00:18:29.157 [2024-06-10 08:13:50.989696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25429 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:50.989729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:51.003359] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e12d8 00:18:29.157 [2024-06-10 08:13:51.005946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:51.005991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:29.157 [2024-06-10 08:13:51.019347] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e1b48 00:18:29.157 [2024-06-10 08:13:51.021779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.157 [2024-06-10 08:13:51.021859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:29.417 [2024-06-10 08:13:51.035788] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e23b8 00:18:29.417 [2024-06-10 08:13:51.038292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.417 [2024-06-10 08:13:51.038324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:29.417 [2024-06-10 08:13:51.052290] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e2c28 00:18:29.417 [2024-06-10 08:13:51.054742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.417 [2024-06-10 08:13:51.054774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:29.417 [2024-06-10 08:13:51.068643] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e3498 00:18:29.417 [2024-06-10 08:13:51.071059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.417 [2024-06-10 08:13:51.071091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:29.417 [2024-06-10 08:13:51.085036] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e3d08 00:18:29.417 [2024-06-10 08:13:51.087427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.417 [2024-06-10 08:13:51.087459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:29.417 [2024-06-10 08:13:51.101387] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e4578 00:18:29.417 [2024-06-10 08:13:51.103792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12934 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.417 [2024-06-10 08:13:51.103855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:29.417 [2024-06-10 08:13:51.117520] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e4de8 00:18:29.417 [2024-06-10 08:13:51.119704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.417 [2024-06-10 08:13:51.119736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:29.417 [2024-06-10 08:13:51.133111] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e5658 00:18:29.417 [2024-06-10 08:13:51.135242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.417 [2024-06-10 08:13:51.135275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:29.417 [2024-06-10 08:13:51.149213] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e5ec8 00:18:29.417 [2024-06-10 08:13:51.151559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.417 [2024-06-10 08:13:51.151592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:29.417 [2024-06-10 08:13:51.165589] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e6738 00:18:29.417 [2024-06-10 08:13:51.167876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.417 [2024-06-10 08:13:51.167910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:29.417 [2024-06-10 08:13:51.182027] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e6fa8 00:18:29.417 [2024-06-10 08:13:51.184258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.418 [2024-06-10 08:13:51.184290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:29.418 [2024-06-10 08:13:51.198593] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e7818 00:18:29.418 [2024-06-10 08:13:51.200924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.418 [2024-06-10 08:13:51.200959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:29.418 [2024-06-10 08:13:51.214891] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e8088 00:18:29.418 [2024-06-10 08:13:51.216948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 
nsid:1 lba:1901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.418 [2024-06-10 08:13:51.216984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:29.418 [2024-06-10 08:13:51.230329] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e88f8 00:18:29.418 [2024-06-10 08:13:51.232430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.418 [2024-06-10 08:13:51.232461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:29.418 [2024-06-10 08:13:51.246604] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e9168 00:18:29.418 [2024-06-10 08:13:51.248813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.418 [2024-06-10 08:13:51.248853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:29.418 [2024-06-10 08:13:51.262872] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190e99d8 00:18:29.418 [2024-06-10 08:13:51.264994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.418 [2024-06-10 08:13:51.265031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:29.418 [2024-06-10 08:13:51.278770] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ea248 00:18:29.418 [2024-06-10 08:13:51.280714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.418 [2024-06-10 08:13:51.280751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.294997] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190eaab8 00:18:29.677 [2024-06-10 08:13:51.297095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.677 [2024-06-10 08:13:51.297132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.311689] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190eb328 00:18:29.677 [2024-06-10 08:13:51.313898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.677 [2024-06-10 08:13:51.313934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.328389] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ebb98 00:18:29.677 [2024-06-10 08:13:51.330523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:20 nsid:1 lba:1179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.677 [2024-06-10 08:13:51.330557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.344835] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ec408 00:18:29.677 [2024-06-10 08:13:51.346955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.677 [2024-06-10 08:13:51.347013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.360773] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ecc78 00:18:29.677 [2024-06-10 08:13:51.362708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.677 [2024-06-10 08:13:51.362742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.377326] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ed4e8 00:18:29.677 [2024-06-10 08:13:51.379399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.677 [2024-06-10 08:13:51.379433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.393948] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190edd58 00:18:29.677 [2024-06-10 08:13:51.395946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.677 [2024-06-10 08:13:51.395981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.410139] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ee5c8 00:18:29.677 [2024-06-10 08:13:51.412035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.677 [2024-06-10 08:13:51.412071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.425708] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190eee38 00:18:29.677 [2024-06-10 08:13:51.427498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.677 [2024-06-10 08:13:51.427532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.441601] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190ef6a8 00:18:29.677 [2024-06-10 08:13:51.443522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:16814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.677 [2024-06-10 08:13:51.443564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:29.677 [2024-06-10 08:13:51.458135] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190eff18 00:18:29.677 [2024-06-10 08:13:51.460065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.678 [2024-06-10 08:13:51.460098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:29.678 [2024-06-10 08:13:51.474577] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f0788 00:18:29.678 [2024-06-10 08:13:51.476405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.678 [2024-06-10 08:13:51.476436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:29.678 [2024-06-10 08:13:51.490520] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f0ff8 00:18:29.678 [2024-06-10 08:13:51.492230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.678 [2024-06-10 08:13:51.492264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:29.678 [2024-06-10 08:13:51.506262] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f1868 00:18:29.678 [2024-06-10 08:13:51.508145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.678 [2024-06-10 08:13:51.508179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:29.678 [2024-06-10 08:13:51.522684] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f20d8 00:18:29.678 [2024-06-10 08:13:51.524584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.678 [2024-06-10 08:13:51.524619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:29.678 [2024-06-10 08:13:51.539513] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f2948 00:18:29.678 [2024-06-10 08:13:51.541323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.678 [2024-06-10 08:13:51.541359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:29.937 [2024-06-10 08:13:51.555765] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f31b8 00:18:29.937 [2024-06-10 08:13:51.557529] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.937 [2024-06-10 08:13:51.557577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:29.937 [2024-06-10 08:13:51.571499] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f3a28 00:18:29.937 [2024-06-10 08:13:51.573164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.937 [2024-06-10 08:13:51.573215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:29.937 [2024-06-10 08:13:51.587758] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f4298 00:18:29.937 [2024-06-10 08:13:51.589539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.937 [2024-06-10 08:13:51.589588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:29.937 [2024-06-10 08:13:51.604060] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f4b08 00:18:29.937 [2024-06-10 08:13:51.605795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.937 [2024-06-10 08:13:51.605859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:29.937 [2024-06-10 08:13:51.619912] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f5378 00:18:29.937 [2024-06-10 08:13:51.621545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.937 [2024-06-10 08:13:51.621594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:29.937 [2024-06-10 08:13:51.635920] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f5be8 00:18:29.937 [2024-06-10 08:13:51.637690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.937 [2024-06-10 08:13:51.637737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:29.937 [2024-06-10 08:13:51.652468] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f6458 00:18:29.937 [2024-06-10 08:13:51.654189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.937 [2024-06-10 08:13:51.654237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:29.937 [2024-06-10 08:13:51.668595] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f6cc8 00:18:29.937 [2024-06-10 08:13:51.670314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.937 [2024-06-10 08:13:51.670377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:29.938 [2024-06-10 08:13:51.685198] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f7538 00:18:29.938 [2024-06-10 08:13:51.686935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.938 [2024-06-10 08:13:51.686983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:29.938 [2024-06-10 08:13:51.701549] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f7da8 00:18:29.938 [2024-06-10 08:13:51.703170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.938 [2024-06-10 08:13:51.703221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:29.938 [2024-06-10 08:13:51.717688] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f8618 00:18:29.938 [2024-06-10 08:13:51.719318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.938 [2024-06-10 08:13:51.719367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:29.938 [2024-06-10 08:13:51.734101] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f8e88 00:18:29.938 [2024-06-10 08:13:51.735594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.938 [2024-06-10 08:13:51.735642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:29.938 [2024-06-10 08:13:51.750103] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f96f8 00:18:29.938 [2024-06-10 08:13:51.751541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.938 [2024-06-10 08:13:51.751590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:29.938 [2024-06-10 08:13:51.766558] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190f9f68 00:18:29.938 [2024-06-10 08:13:51.768159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.938 [2024-06-10 08:13:51.768207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:29.938 [2024-06-10 08:13:51.782967] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fa7d8 00:18:29.938 [2024-06-10 08:13:51.784439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.938 [2024-06-10 08:13:51.784487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:29.938 [2024-06-10 08:13:51.799433] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fb048 00:18:29.938 [2024-06-10 08:13:51.800889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.938 [2024-06-10 08:13:51.800923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:30.198 [2024-06-10 08:13:51.815930] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b580) with pdu=0x2000190fb8b8 00:18:30.198 [2024-06-10 08:13:51.817420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.198 [2024-06-10 08:13:51.817468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:30.198 00:18:30.198 Latency(us) 00:18:30.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.198 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.198 nvme0n1 : 2.00 15487.93 60.50 0.00 0.00 8256.74 7298.33 31457.28 00:18:30.198 =================================================================================================================== 00:18:30.198 Total : 15487.93 60.50 0.00 0.00 8256.74 7298.33 31457.28 00:18:30.198 0 00:18:30.198 08:13:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:30.198 08:13:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:30.198 08:13:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:30.198 08:13:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:30.198 | .driver_specific 00:18:30.198 | .nvme_error 00:18:30.198 | .status_code 00:18:30.198 | .command_transient_transport_error' 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 )) 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80589 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 80589 ']' 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 80589 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80589 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 
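The xtrace above is host/digest.sh checking the outcome of the 128-deep randwrite pass: get_transient_errcount pulls the per-bdev NVMe error counters over the bdevperf RPC socket and extracts command_transient_transport_error, which evaluated to 121, so the (( 121 > 0 )) assertion passes and the bdevperf process (pid 80589) is torn down. A minimal sketch of that query, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock, jq is installed, and bdev_nvme_set_options was given --nvme-error-stat when the controller was set up (as it is again for the next case further down), since that is what the test relies on to populate the nvme_error counters:

    # Read the transient transport error counter for nvme0n1
    # (socket path, script path and bdev name taken from the trace above)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'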
00:18:30.462 killing process with pid 80589 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80589' 00:18:30.462 Received shutdown signal, test time was about 2.000000 seconds 00:18:30.462 00:18:30.462 Latency(us) 00:18:30.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.462 =================================================================================================================== 00:18:30.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 80589 00:18:30.462 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 80589 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80644 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80644 /var/tmp/bperf.sock 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 80644 ']' 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:30.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:30.720 08:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:30.720 [2024-06-10 08:13:52.399578] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:18:30.720 [2024-06-10 08:13:52.399706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80644 ] 00:18:30.720 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:30.720 Zero copy mechanism will not be used. 
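At this point run_bperf_err has moved on to the next case (randwrite, 128 KiB I/Os, queue depth 16): it launches a fresh bdevperf in RPC-wait mode (-z) on core mask 0x2 and blocks in waitforlisten until the RPC socket is up before configuring it. Roughly, using the same paths as the trace; the real waitforlisten helper in autotest_common.sh is more careful than this polling loop, which is only a stand-in:

    # Start bdevperf idle (-z) and wait for its RPC socket to answer
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # crude stand-in for waitforlisten: poll until the RPC socket accepts requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done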
00:18:30.720 [2024-06-10 08:13:52.532273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.979 [2024-06-10 08:13:52.651593] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.979 [2024-06-10 08:13:52.706667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:31.548 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:31.548 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:18:31.548 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:31.548 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:31.807 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:31.807 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.807 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:31.807 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.807 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:31.807 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:32.066 nvme0n1 00:18:32.066 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:32.066 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.066 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.066 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.066 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:32.066 08:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:32.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:32.327 Zero copy mechanism will not be used. 00:18:32.327 Running I/O for 2 seconds... 
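The trace above is the whole setup for this digest-error case before the 2-second run starts: NVMe error counting is enabled and the bdev retry count set to -1 on the bdevperf side, any previous crc32c error injection is disabled, the controller is attached over TCP with data digest enabled (--ddgst), crc32c corruption is then injected on every 32nd operation, and perform_tests kicks off the run whose digest failures fill the log below as data_crc32_calc_done errors and COMMAND TRANSIENT TRANSPORT ERROR completions. Condensed into plain RPC calls with the same sockets and arguments as the trace; note that the accel_error_inject_error calls go through rpc_cmd, i.e. the target application's RPC socket, whose path is not shown in this excerpt and is left as a placeholder, and the nvmf target with its listener at 10.0.0.2:4420 is assumed to have been set up earlier in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgt_sock=/var/tmp/spdk.sock   # placeholder: the target app's RPC socket used by rpc_cmd
    # bdevperf side: enable NVMe error counters, set bdev retry count to -1
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: clear any previous crc32c error injection
    $rpc -s "$tgt_sock" accel_error_inject_error -o crc32c -t disable
    # attach the controller over TCP with data digest enabled
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: corrupt the crc32c result on every 32nd operation
    $rpc -s "$tgt_sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    # run the workload on the attached bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests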
00:18:32.327 [2024-06-10 08:13:54.015614] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.015995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.016030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.021850] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.022178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.022213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.027899] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.028239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.028275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.033914] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.034244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.034278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.039882] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.040201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.040234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.045821] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.046153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.046186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.051786] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.052122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.052162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.057932] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.058272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.058311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.063958] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.064279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.064314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.070036] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.070354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.070389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.076183] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.076500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.076542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.082393] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.082728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.082775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.088401] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.088702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.088726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.094436] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.094747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.094867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.100555] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.100891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.100921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.106674] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.107006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.107039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.112687] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.113048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.113079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.118806] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.119129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.119173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.124775] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.125130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.125162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.130824] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.131157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.131197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.136736] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.137083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.137128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.142834] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.143156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.143186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.148824] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.149144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.149174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.154692] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.155028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.155064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.160617] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.160958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.160987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.166611] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.166950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.327 [2024-06-10 08:13:54.166979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.327 [2024-06-10 08:13:54.172573] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.327 [2024-06-10 08:13:54.172906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.328 [2024-06-10 08:13:54.172931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.328 [2024-06-10 08:13:54.178510] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.328 [2024-06-10 08:13:54.178847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.328 
[2024-06-10 08:13:54.178878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.328 [2024-06-10 08:13:54.184618] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.328 [2024-06-10 08:13:54.184969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.328 [2024-06-10 08:13:54.184997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.328 [2024-06-10 08:13:54.190715] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.328 [2024-06-10 08:13:54.191062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.328 [2024-06-10 08:13:54.191091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.588 [2024-06-10 08:13:54.196890] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.588 [2024-06-10 08:13:54.197208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.588 [2024-06-10 08:13:54.197244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.588 [2024-06-10 08:13:54.202860] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.588 [2024-06-10 08:13:54.203178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.588 [2024-06-10 08:13:54.203206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.588 [2024-06-10 08:13:54.208807] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.588 [2024-06-10 08:13:54.209154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.588 [2024-06-10 08:13:54.209189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.588 [2024-06-10 08:13:54.214773] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.588 [2024-06-10 08:13:54.215116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.588 [2024-06-10 08:13:54.215141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.588 [2024-06-10 08:13:54.221069] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.588 [2024-06-10 08:13:54.221373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:32.588 [2024-06-10 08:13:54.221436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.588 [2024-06-10 08:13:54.227031] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.588 [2024-06-10 08:13:54.227360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.588 [2024-06-10 08:13:54.227384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.588 [2024-06-10 08:13:54.233091] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.588 [2024-06-10 08:13:54.233395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.588 [2024-06-10 08:13:54.233426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.588 [2024-06-10 08:13:54.239214] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.588 [2024-06-10 08:13:54.239547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.588 [2024-06-10 08:13:54.239585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.588 [2024-06-10 08:13:54.245239] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.245571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.245603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.251237] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.251579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.251609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.257256] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.257567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.257591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.262899] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.262998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.263022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.268874] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.268967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.268990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.274740] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.274849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.274872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.280733] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.280843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.280875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.286900] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.287004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.287026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.292992] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.293073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.293096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.298925] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.299021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.299045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.304864] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.304965] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.304988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.310717] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.310834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.310865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.316679] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.316767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.316789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.322779] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.322884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.322905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.328620] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.328730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.328753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.334615] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.334715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.334738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.340536] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.340635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.340657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.346437] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.346539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.346561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.352374] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.352471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.352493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.358275] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.358371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.358393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.364212] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.364311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.364334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.370119] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.370216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.370239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.376073] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.376172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.376194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.382047] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.382127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.382149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.388208] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 
[2024-06-10 08:13:54.388303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.388326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.394342] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.394443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.394465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.589 [2024-06-10 08:13:54.400236] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.589 [2024-06-10 08:13:54.400335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.589 [2024-06-10 08:13:54.400357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.590 [2024-06-10 08:13:54.406353] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.590 [2024-06-10 08:13:54.406452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.590 [2024-06-10 08:13:54.406475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.590 [2024-06-10 08:13:54.412371] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.590 [2024-06-10 08:13:54.412466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.590 [2024-06-10 08:13:54.412489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.590 [2024-06-10 08:13:54.418294] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.590 [2024-06-10 08:13:54.418386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.590 [2024-06-10 08:13:54.418408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.590 [2024-06-10 08:13:54.424364] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.590 [2024-06-10 08:13:54.424464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.590 [2024-06-10 08:13:54.424487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.590 [2024-06-10 08:13:54.430493] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) 
with pdu=0x2000190fef90 00:18:32.590 [2024-06-10 08:13:54.430591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.590 [2024-06-10 08:13:54.430614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.590 [2024-06-10 08:13:54.436649] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.590 [2024-06-10 08:13:54.436747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.590 [2024-06-10 08:13:54.436769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.590 [2024-06-10 08:13:54.442738] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.590 [2024-06-10 08:13:54.442841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.590 [2024-06-10 08:13:54.442863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.590 [2024-06-10 08:13:54.448586] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.590 [2024-06-10 08:13:54.448692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.590 [2024-06-10 08:13:54.448715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.454832] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.454948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.454971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.460898] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.460974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.460996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.466858] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.466957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.466980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.472863] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.472960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.472983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.478894] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.478987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.479010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.484973] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.485061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.485084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.491020] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.491120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.491142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.496844] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.496955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.496978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.503091] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.503198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.503222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.509300] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.509401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.509423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.515667] 
tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.515766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.851 [2024-06-10 08:13:54.515788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.851 [2024-06-10 08:13:54.521962] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.851 [2024-06-10 08:13:54.522062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.522085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.528054] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.528152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.528176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.534163] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.534251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.534274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.540313] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.540410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.540433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.546635] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.546737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.546759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.552899] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.552983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.553006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
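
The "(00/22)" printed by spdk_nvme_print_completion in the entries above is the NVMe status code type / status code pair: type 0x00 (generic command status) with code 0x22, i.e. the Transient Transport Error that every injected digest failure resolves to. As an illustrative aside only (not SPDK code; the struct and helper names below are made up for the sketch), a minimal C example of unpacking those fields from completion dword 3, assuming the standard NVMe completion-queue-entry layout:

    #include <stdint.h>
    #include <stdio.h>

    /* Fields reported as "(sct/sc) ... p:_ m:_ dnr:_" in the log lines. */
    struct cpl_status {
        uint8_t p;    /* phase tag        */
        uint8_t sc;   /* status code      */
        uint8_t sct;  /* status code type */
        uint8_t m;    /* more             */
        uint8_t dnr;  /* do not retry     */
    };

    static struct cpl_status decode_dw3(uint32_t dw3)
    {
        struct cpl_status s = {
            .p   = (dw3 >> 16) & 0x1,
            .sc  = (dw3 >> 17) & 0xff,
            .sct = (dw3 >> 25) & 0x7,
            .m   = (dw3 >> 30) & 0x1,
            .dnr = (dw3 >> 31) & 0x1,
        };
        return s;
    }

    int main(void)
    {
        /* Hypothetical dword 3 carrying SCT=0x00, SC=0x22 and clear
         * p/m/dnr bits, matching "(00/22) ... p:0 m:0 dnr:0" above. */
        uint32_t dw3 = (uint32_t)0x22 << 17;
        struct cpl_status s = decode_dw3(dw3);
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }
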
00:18:32.852 [2024-06-10 08:13:54.559134] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.559237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.559259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.565556] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.565653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.565676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.571759] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.571881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.571909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.578008] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.578112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.578134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.584051] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.584167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.584189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.590349] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.590449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.590471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.596723] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.596829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.596898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.603043] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.603120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.603143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.609133] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.609212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.609235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.615351] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.615455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.615477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.621621] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.621723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.621745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.627977] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.628056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.628078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.634142] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.634232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.634254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.640384] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.640482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.640504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.646616] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.646711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.646734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.652668] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.652769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.652791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.658670] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.658770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.658793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.664678] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.664763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.664785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.670673] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.670771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.670793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.676678] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.676779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.676801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.682852] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.682954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.852 [2024-06-10 08:13:54.682976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.852 [2024-06-10 08:13:54.688962] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.852 [2024-06-10 08:13:54.689050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.853 [2024-06-10 08:13:54.689073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.853 [2024-06-10 08:13:54.695181] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.853 [2024-06-10 08:13:54.695282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.853 [2024-06-10 08:13:54.695304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.853 [2024-06-10 08:13:54.701373] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.853 [2024-06-10 08:13:54.701473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.853 [2024-06-10 08:13:54.701496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.853 [2024-06-10 08:13:54.707737] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.853 [2024-06-10 08:13:54.707860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.853 [2024-06-10 08:13:54.707882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.853 [2024-06-10 08:13:54.713906] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:32.853 [2024-06-10 08:13:54.714036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.853 [2024-06-10 08:13:54.714059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.720132] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.720233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.720256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.726428] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.726528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 
[2024-06-10 08:13:54.726551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.732533] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.732634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.732657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.738500] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.738628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.738650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.744590] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.744687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.744709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.750646] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.750759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.750781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.756692] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.756790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.756825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.762639] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.762738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.762761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.768675] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.768786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.768809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.774930] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.775039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.775061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.781127] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.781217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.781240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.787206] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.787317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.787339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.793356] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.793453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.793475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.799336] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.799441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.799463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.805397] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.805512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.805535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.811408] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.811520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.811542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.817392] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.817492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.817515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.823491] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.823598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.823620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.829668] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.829770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.114 [2024-06-10 08:13:54.829792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.114 [2024-06-10 08:13:54.835837] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.114 [2024-06-10 08:13:54.835964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.835986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.842012] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.842104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.842126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.848101] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.848202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.848224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.854408] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.854533] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.854555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.860722] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.860856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.860897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.866900] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.867014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.867036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.872858] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.872977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.873000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.879033] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.879136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.879158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.885197] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.885270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.885293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.891337] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.891452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.891474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.897420] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.897514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.897537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.903611] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.903748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.903771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.909789] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.909932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.909955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.916034] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.916109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.916132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.922324] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.922447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.922470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.928518] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.928624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.928656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.934799] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.934899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.934922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.941018] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 
[2024-06-10 08:13:54.941107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.941129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.947223] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.947347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.947378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.953567] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.953660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.953682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.959770] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.959932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.959954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.966414] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.966530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.966553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.972960] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.973064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.973086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.115 [2024-06-10 08:13:54.979585] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.115 [2024-06-10 08:13:54.979664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.115 [2024-06-10 08:13:54.979687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:54.986382] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) 
with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:54.986455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:54.986478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:54.992851] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:54.992944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:54.992967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:54.999298] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:54.999395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:54.999418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.005676] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.005787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.005811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.012129] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.012244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.012266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.018640] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.018783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.018806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.025061] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.025150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.025173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.031517] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.031617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.031640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.038066] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.038168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.038191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.044534] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.044639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.044661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.051034] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.051138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.051161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.057608] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.057719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.057748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.064155] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.064263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.064294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.070777] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.070923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.070946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.077338] 
tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.376 [2024-06-10 08:13:55.077438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.376 [2024-06-10 08:13:55.077461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.376 [2024-06-10 08:13:55.083757] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.083873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.083918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.090157] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.090232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.090255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.096659] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.096770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.096793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.103270] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.103395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.103418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.110028] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.110144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.110166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.116593] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.116709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.116731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
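
Every iteration above follows the same three-entry pattern: data_crc32_calc_done in tcp.c reports that the recomputed CRC32C data digest (DDGST) of a data PDU does not match, and the corresponding WRITE then completes with the transient transport error printed by the host-side nvme_qpair.c. A self-contained C sketch of that digest check (a plain bitwise CRC32C with the Castagnoli polynomial, illustrative only and not the SPDK code path):

    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the
     * digest NVMe/TCP carries in the DDGST field of a data PDU. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= p[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Hypothetical payload; the WRITEs in the log span len:32 blocks,
         * but a few bytes are enough to show the mismatch. */
        uint8_t payload[32] = "nvme/tcp data pdu payload";
        uint32_t ddgst_sent = crc32c(payload, sizeof(payload));

        payload[7] ^= 0x01;  /* simulate the corruption the test injects */
        uint32_t ddgst_calc = crc32c(payload, sizeof(payload));

        if (ddgst_calc != ddgst_sent)
            printf("Data digest error: sent=0x%08x calculated=0x%08x\n",
                   ddgst_sent, ddgst_calc);
        return 0;
    }

Built with any C compiler, this prints one mismatch line analogous to the *ERROR* entries above; in the captured run the mismatch is surfaced once per command, which is why the pattern repeats for each LBA.
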
00:18:33.377 [2024-06-10 08:13:55.123203] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.123319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.123341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.129846] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.129986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.130009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.136279] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.136381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.136403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.142983] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.143121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.143142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.149524] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.149628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.149651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.156140] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.156254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.156276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.162677] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.162790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.162812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.169195] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.169333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.169355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.175754] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.175892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.175914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.182285] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.182427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.182448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.188878] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.189000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.189023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.195224] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.195355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.195377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.201885] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.201999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.202021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.208308] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.208418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.208441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.215003] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.215129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.215152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.221546] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.221658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.221681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.228081] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.228210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.228232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.234698] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.234825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.234860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.377 [2024-06-10 08:13:55.241683] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.377 [2024-06-10 08:13:55.241811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.377 [2024-06-10 08:13:55.241833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.248502] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.248599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.248623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.255047] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.255156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.255178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.261397] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.261509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.261532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.267911] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.268007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.268029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.274644] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.274771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.274794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.281453] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.281578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.281600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.288105] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.288224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.288247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.294424] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.294570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.294592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.301002] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.301098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 
[2024-06-10 08:13:55.301121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.307303] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.307443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.307465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.313710] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.313828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.313880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.320019] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.320132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.320155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.326125] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.326235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.326257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.332105] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.332276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.332297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.338198] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.338296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.338319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.344255] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.344374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.344396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.350247] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.350342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.350364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.356460] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.356562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.356584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.362686] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.362803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.362838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.369092] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.369193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.638 [2024-06-10 08:13:55.369216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.638 [2024-06-10 08:13:55.375249] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.638 [2024-06-10 08:13:55.375369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.375391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.381550] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.381689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.381711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.388090] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.388187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.388209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.394470] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.394583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.394605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.401071] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.401164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.401193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.407466] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.407624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.407646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.414325] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.414489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.414522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.420951] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.421027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.421050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.427314] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.427439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.427461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.433549] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.433663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.433685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.439802] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.439949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.439972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.446081] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.446175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.446198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.452292] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.452414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.452436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.458635] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.458759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.458780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.464796] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.464924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.464947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.470954] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.471058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.471080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.476897] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.477009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.477032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.483180] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.483311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.483349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.489123] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.489224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.489257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.495264] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.495367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.495401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.639 [2024-06-10 08:13:55.501654] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.639 [2024-06-10 08:13:55.501769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.639 [2024-06-10 08:13:55.501792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.507728] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.507859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.507881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.513969] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.514062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.514090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.519988] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.520088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.520110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.526212] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.526326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.526348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.532373] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.532493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.532515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.538481] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.538589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.538610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.544521] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.544654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.544677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.550796] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.550973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.550996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.556813] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.556920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.556944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.562806] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 
08:13:55.562921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.562944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.568757] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.568952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.568975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.574863] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.574993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.575015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.581105] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.581192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.581214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.587161] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.587265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.587287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.593380] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.593492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.593515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.599473] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.599581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.599603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.605661] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 
00:18:33.900 [2024-06-10 08:13:55.605758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.605780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.611788] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.611944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.611966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.617951] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.618066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.618088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.624081] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.624161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.624183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.630242] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.630344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.630367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.636457] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.636597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.636619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.642832] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.642915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.900 [2024-06-10 08:13:55.642937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.900 [2024-06-10 08:13:55.648976] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with 
pdu=0x2000190fef90 00:18:33.900 [2024-06-10 08:13:55.649077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.649101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.654929] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.655029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.655051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.660825] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.660971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.660993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.666878] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.667007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.667029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.673082] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.673155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.673177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.679077] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.679181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.679203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.685083] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.685210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.685232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.691018] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.691172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.691193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.697717] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.697828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.697851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.704154] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.704250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.704273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.710558] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.710708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.710730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.716865] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.716989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.717011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.723266] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.723364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.723387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.729616] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.729737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.729760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.736108] tcp.c:2133:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.736203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.736226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.742817] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.742999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.743022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.749426] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.749535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.749559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.756128] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.756265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.756288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.901 [2024-06-10 08:13:55.762995] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:33.901 [2024-06-10 08:13:55.763120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.901 [2024-06-10 08:13:55.763142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.160 [2024-06-10 08:13:55.769503] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.160 [2024-06-10 08:13:55.769607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.160 [2024-06-10 08:13:55.769630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.160 [2024-06-10 08:13:55.776157] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.776251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.776274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.782865] 
tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.783037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.783060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.789366] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.789490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.789519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.796164] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.796275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.796297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.802821] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.802993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.803016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.809331] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.809423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.809445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.815937] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.816075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.816097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.822546] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.822695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.822717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.161 
[2024-06-10 08:13:55.829072] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.829190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.829212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.835687] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.835827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.835875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.842389] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.842549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.842571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.848899] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.849008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.849031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.855329] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.855430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.855452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.862156] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.862268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.862290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.868730] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.868835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.868880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.875230] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.875326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.875349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.881841] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.881947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.881970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.888311] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.888458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.888480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.894989] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.895130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.895152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.901726] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.901844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.901877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.908295] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.908394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.908416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.914999] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.915136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.915157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.921470] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.921662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.921684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.928189] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.928312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.928335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.934765] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.934979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.935000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.941402] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.941540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.941562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.947984] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.161 [2024-06-10 08:13:55.948112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.161 [2024-06-10 08:13:55.948135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.161 [2024-06-10 08:13:55.954543] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.162 [2024-06-10 08:13:55.954660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.162 [2024-06-10 08:13:55.954683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.162 [2024-06-10 08:13:55.961128] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.162 [2024-06-10 08:13:55.961241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.162 [2024-06-10 08:13:55.961275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.162 [2024-06-10 08:13:55.967953] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.162 [2024-06-10 08:13:55.968060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.162 [2024-06-10 08:13:55.968083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.162 [2024-06-10 08:13:55.974552] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.162 [2024-06-10 08:13:55.974659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.162 [2024-06-10 08:13:55.974682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.162 [2024-06-10 08:13:55.981098] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.162 [2024-06-10 08:13:55.981247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.162 [2024-06-10 08:13:55.981269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.162 [2024-06-10 08:13:55.987735] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.162 [2024-06-10 08:13:55.987874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.162 [2024-06-10 08:13:55.987913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.162 [2024-06-10 08:13:55.994306] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.162 [2024-06-10 08:13:55.994404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.162 [2024-06-10 08:13:55.994428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.162 [2024-06-10 08:13:56.000960] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.162 [2024-06-10 08:13:56.001051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.162 [2024-06-10 08:13:56.001074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.162 [2024-06-10 08:13:56.007438] tcp.c:2133:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x70b720) with pdu=0x2000190fef90 00:18:34.162 [2024-06-10 08:13:56.007581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.162 [2024-06-10 08:13:56.007603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.162 00:18:34.162 Latency(us) 00:18:34.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.162 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:34.162 nvme0n1 : 2.00 4944.49 618.06 0.00 0.00 3229.19 2442.71 6881.28 00:18:34.162 =================================================================================================================== 00:18:34.162 Total : 4944.49 618.06 0.00 0.00 3229.19 2442.71 6881.28 00:18:34.162 0 00:18:34.420 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:34.420 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:34.420 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:34.420 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:34.420 | .driver_specific 00:18:34.420 | .nvme_error 00:18:34.420 | .status_code 00:18:34.420 | .command_transient_transport_error' 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 319 > 0 )) 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80644 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 80644 ']' 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 80644 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80644 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80644' 00:18:34.678 killing process with pid 80644 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 80644 00:18:34.678 Received shutdown signal, test time was about 2.000000 seconds 00:18:34.678 00:18:34.678 Latency(us) 00:18:34.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.678 =================================================================================================================== 00:18:34.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.678 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 80644 00:18:34.936 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80442 00:18:34.936 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 80442 ']' 00:18:34.936 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 80442 00:18:34.936 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:18:34.936 08:13:56 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:34.936 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80442 00:18:34.936 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:34.936 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:34.936 killing process with pid 80442 00:18:34.936 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80442' 00:18:34.936 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 80442 00:18:34.936 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 80442 00:18:35.195 00:18:35.195 real 0m18.566s 00:18:35.195 user 0m34.666s 00:18:35.195 sys 0m5.702s 00:18:35.195 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:35.195 08:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:35.195 ************************************ 00:18:35.195 END TEST nvmf_digest_error 00:18:35.195 ************************************ 00:18:35.195 08:13:56 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:35.195 08:13:56 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:35.195 08:13:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:35.195 08:13:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:35.195 08:13:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.195 08:13:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:35.195 08:13:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.195 08:13:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.195 rmmod nvme_tcp 00:18:35.195 rmmod nvme_fabrics 00:18:35.195 rmmod nvme_keyring 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80442 ']' 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80442 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 80442 ']' 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 80442 00:18:35.195 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (80442) - No such process 00:18:35.195 Process with pid 80442 is not found 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 80442 is not found' 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:35.195 00:18:35.195 real 0m38.045s 00:18:35.195 user 1m9.746s 00:18:35.195 sys 0m11.835s 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:35.195 08:13:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:35.195 ************************************ 00:18:35.195 END TEST nvmf_digest 00:18:35.195 ************************************ 00:18:35.455 08:13:57 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:18:35.455 08:13:57 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:18:35.455 08:13:57 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:35.455 08:13:57 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:35.455 08:13:57 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:35.455 08:13:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:35.455 ************************************ 00:18:35.455 START TEST nvmf_host_multipath 00:18:35.455 ************************************ 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:35.455 * Looking for test storage... 00:18:35.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.455 08:13:57 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:35.455 Cannot find device "nvmf_tgt_br" 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:35.455 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.456 Cannot find device "nvmf_tgt_br2" 00:18:35.456 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:35.456 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:35.456 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:35.456 Cannot find device "nvmf_tgt_br" 00:18:35.456 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:35.456 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:35.456 Cannot find device "nvmf_tgt_br2" 00:18:35.456 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:35.456 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:35.456 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 
00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:35.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:18:35.715 00:18:35.715 --- 10.0.0.2 ping statistics --- 00:18:35.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.715 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:35.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:35.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:35.715 00:18:35.715 --- 10.0.0.3 ping statistics --- 00:18:35.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.715 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:35.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:35.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:18:35.715 00:18:35.715 --- 10.0.0.1 ping statistics --- 00:18:35.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.715 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80910 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80910 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@830 -- # '[' -z 80910 ']' 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:35.715 08:13:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:35.974 [2024-06-10 08:13:57.603067] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:18:35.974 [2024-06-10 08:13:57.603193] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.974 [2024-06-10 08:13:57.741324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:36.233 [2024-06-10 08:13:57.880020] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.233 [2024-06-10 08:13:57.880096] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:36.233 [2024-06-10 08:13:57.880109] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.233 [2024-06-10 08:13:57.880118] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.233 [2024-06-10 08:13:57.880127] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.233 [2024-06-10 08:13:57.880545] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.233 [2024-06-10 08:13:57.880592] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.233 [2024-06-10 08:13:57.950942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:36.800 08:13:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:36.800 08:13:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@863 -- # return 0 00:18:36.800 08:13:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.800 08:13:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:36.800 08:13:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:36.800 08:13:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.800 08:13:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80910 00:18:36.800 08:13:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:37.058 [2024-06-10 08:13:58.864742] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.058 08:13:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:37.317 Malloc0 00:18:37.575 08:13:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:37.834 08:13:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.834 08:13:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.092 [2024-06-10 08:13:59.936166] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.092 08:13:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:38.351 [2024-06-10 08:14:00.156473] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:38.351 08:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80966 00:18:38.351 08:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:38.351 08:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.351 08:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80966 
/var/tmp/bdevperf.sock 00:18:38.351 08:14:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@830 -- # '[' -z 80966 ']' 00:18:38.351 08:14:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.351 08:14:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:38.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.351 08:14:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.351 08:14:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:38.351 08:14:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:39.766 08:14:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:39.766 08:14:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@863 -- # return 0 00:18:39.766 08:14:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:39.766 08:14:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:40.024 Nvme0n1 00:18:40.024 08:14:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:40.283 Nvme0n1 00:18:40.283 08:14:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:40.283 08:14:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:41.660 08:14:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:41.660 08:14:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:41.660 08:14:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:41.920 08:14:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:41.920 08:14:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81011 00:18:41.920 08:14:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80910 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:41.920 08:14:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:48.488 08:14:09 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.488 Attaching 4 probes... 00:18:48.488 @path[10.0.0.2, 4421]: 17352 00:18:48.488 @path[10.0.0.2, 4421]: 17367 00:18:48.488 @path[10.0.0.2, 4421]: 17582 00:18:48.488 @path[10.0.0.2, 4421]: 18488 00:18:48.488 @path[10.0.0.2, 4421]: 19034 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81011 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:48.488 08:14:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:48.488 08:14:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:48.747 08:14:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:48.747 08:14:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81124 00:18:48.747 08:14:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80910 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:48.747 08:14:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.317 Attaching 4 probes... 
00:18:55.317 @path[10.0.0.2, 4420]: 14930 00:18:55.317 @path[10.0.0.2, 4420]: 15370 00:18:55.317 @path[10.0.0.2, 4420]: 15017 00:18:55.317 @path[10.0.0.2, 4420]: 15070 00:18:55.317 @path[10.0.0.2, 4420]: 15592 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81124 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:55.317 08:14:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:55.317 08:14:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:55.576 08:14:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:55.576 08:14:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81242 00:18:55.576 08:14:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:55.576 08:14:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80910 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:02.145 Attaching 4 probes... 
00:19:02.145 @path[10.0.0.2, 4421]: 15265 00:19:02.145 @path[10.0.0.2, 4421]: 17389 00:19:02.145 @path[10.0.0.2, 4421]: 18176 00:19:02.145 @path[10.0.0.2, 4421]: 18034 00:19:02.145 @path[10.0.0.2, 4421]: 18010 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81242 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:02.145 08:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:02.403 08:14:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:02.404 08:14:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81354 00:19:02.404 08:14:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80910 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:02.404 08:14:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:08.965 Attaching 4 probes... 
00:19:08.965 00:19:08.965 00:19:08.965 00:19:08.965 00:19:08.965 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:08.965 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81354 00:19:08.966 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:08.966 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:08.966 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:08.966 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:09.224 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:09.224 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81467 00:19:09.224 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:09.224 08:14:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80910 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:15.800 08:14:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:15.800 08:14:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:15.800 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:15.800 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:15.800 Attaching 4 probes... 
00:19:15.800 @path[10.0.0.2, 4421]: 17208 00:19:15.800 @path[10.0.0.2, 4421]: 17520 00:19:15.800 @path[10.0.0.2, 4421]: 17440 00:19:15.801 @path[10.0.0.2, 4421]: 17664 00:19:15.801 @path[10.0.0.2, 4421]: 17670 00:19:15.801 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:15.801 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:15.801 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:15.801 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:15.801 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:15.801 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:15.801 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81467 00:19:15.801 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:15.801 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:15.801 08:14:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:16.737 08:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:16.737 08:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81596 00:19:16.737 08:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80910 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:16.737 08:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:23.302 Attaching 4 probes... 
00:19:23.302 @path[10.0.0.2, 4420]: 17583 00:19:23.302 @path[10.0.0.2, 4420]: 19188 00:19:23.302 @path[10.0.0.2, 4420]: 19704 00:19:23.302 @path[10.0.0.2, 4420]: 20029 00:19:23.302 @path[10.0.0.2, 4420]: 19829 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81596 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:23.302 08:14:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:23.302 [2024-06-10 08:14:45.072533] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:23.302 08:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:23.561 08:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:30.126 08:14:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:30.126 08:14:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81765 00:19:30.126 08:14:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80910 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:30.126 08:14:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:36.701 Attaching 4 probes... 
00:19:36.701 @path[10.0.0.2, 4421]: 18695 00:19:36.701 @path[10.0.0.2, 4421]: 18989 00:19:36.701 @path[10.0.0.2, 4421]: 18145 00:19:36.701 @path[10.0.0.2, 4421]: 17751 00:19:36.701 @path[10.0.0.2, 4421]: 17838 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81765 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80966 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@949 -- # '[' -z 80966 ']' 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # kill -0 80966 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # uname 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80966 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:19:36.701 killing process with pid 80966 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80966' 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@968 -- # kill 80966 00:19:36.701 08:14:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@973 -- # wait 80966 00:19:36.701 Connection closed with partial response: 00:19:36.701 00:19:36.701 00:19:36.701 08:14:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80966 00:19:36.701 08:14:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:36.701 [2024-06-10 08:14:00.224982] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:19:36.701 [2024-06-10 08:14:00.225115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80966 ] 00:19:36.701 [2024-06-10 08:14:00.364148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.701 [2024-06-10 08:14:00.501397] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.701 [2024-06-10 08:14:00.562046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:36.701 Running I/O for 90 seconds... 
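(Editorial sketch, not part of the log: what follows is the bdevperf host log, test/nvmf/host/try.txt, catted once the test finished. The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions — path-related status type 03h, status code 02h, as printed in the records themselves — are what the host sees while the target's ANA state for the 10.0.0.2:4420 and 10.0.0.2:4421 listeners is being switched; the per-path counters above show that I/O nevertheless keeps flowing on whichever port is currently optimized. The transitions that produce these records are the rpc.py calls already visible in the trace, consolidated below purely as an illustrative sequence.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Make 4420 the non-optimized and 4421 the optimized path; I/O is expected to move to 4421.
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized
    # Drop the 4421 listener entirely; I/O has to fail over to 4420.
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    # Bring 4421 back as optimized; I/O is expected to move back to it.
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized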
00:19:36.701 [2024-06-10 08:14:10.467035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.701 [2024-06-10 08:14:10.467105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.701 [2024-06-10 08:14:10.467198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.701 [2024-06-10 08:14:10.467236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.701 [2024-06-10 08:14:10.467272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.701 [2024-06-10 08:14:10.467307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.701 [2024-06-10 08:14:10.467358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.701 [2024-06-10 08:14:10.467406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.701 [2024-06-10 08:14:10.467438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.701 [2024-06-10 08:14:10.467470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.701 [2024-06-10 08:14:10.467503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.701 [2024-06-10 08:14:10.467565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.701 [2024-06-10 08:14:10.467599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.701 [2024-06-10 08:14:10.467631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.701 [2024-06-10 08:14:10.467663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.701 [2024-06-10 08:14:10.467682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.467696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.467714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.467727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.467746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.467760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.467778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.467792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.467810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.467823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.467856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.467870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.467890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.467902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.467922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.467936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.467955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.467977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.467998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.468029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:36.702 [2024-06-10 08:14:10.468247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.468962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.702 [2024-06-10 08:14:10.468980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.469003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.469017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.469038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.469053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.469075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.469090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.469111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.469126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.469147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.702 [2024-06-10 08:14:10.469162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:36.702 [2024-06-10 08:14:10.469184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 
dnr:0 00:19:36.703 [2024-06-10 08:14:10.469432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.469625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.469659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.469692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.469741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.469775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.469834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.469882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.469930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.469968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.469988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.470003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.470056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.470092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.470128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.470164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.470200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.470237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.470278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.470315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.470392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.470441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.470483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.470516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.470550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.703 [2024-06-10 08:14:10.470583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:36.703 [2024-06-10 08:14:10.470616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.470650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.470684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.703 [2024-06-10 08:14:10.470717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.703 [2024-06-10 08:14:10.470737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.470751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.470770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.470784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.470819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.470850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.470880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.470898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.470927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.470943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.470972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.470986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:43 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471434] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.471502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.471516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.472975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.704 [2024-06-10 08:14:10.473005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 
p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:10.473901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:10.473917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:17.015597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:17.015693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:17.015757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:17.015791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:17.015818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:17.015834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.704 [2024-06-10 08:14:17.015856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.704 [2024-06-10 08:14:17.015871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.015892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.015937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.015961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.015976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.015998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.705 [2024-06-10 08:14:17.016369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.705 [2024-06-10 08:14:17.016417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.705 [2024-06-10 08:14:17.016453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:36.705 [2024-06-10 08:14:17.016490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.705 [2024-06-10 08:14:17.016526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.705 [2024-06-10 08:14:17.016562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.705 [2024-06-10 08:14:17.016598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.705 [2024-06-10 08:14:17.016635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.016975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.016990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.017012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.017026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.017059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.017075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.017097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.017111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.017133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.017147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.017169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.017184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.017207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.705 [2024-06-10 08:14:17.017222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.705 [2024-06-10 08:14:17.017244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.017259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.017296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.017332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.017369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.017404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.017440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:19:36.706 [2024-06-10 08:14:17.017815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.017961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.017983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.018101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.018138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.018174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.018211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.018248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.018285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.018321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.706 [2024-06-10 08:14:17.018357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.706 [2024-06-10 08:14:17.018679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.706 [2024-06-10 08:14:17.018694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.018715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.018730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.018752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.018766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.018803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.018822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.018845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.018861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.018882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.018897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:36.707 [2024-06-10 08:14:17.019026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.019063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.019663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.019699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.019737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.019773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.019826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.019864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.019900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.019922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.019944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.020800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.707 [2024-06-10 08:14:17.020829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.020864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.020881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.020922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.020940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.020971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.020987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.021017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.021042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
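The NOTICE lines above are emitted in pairs by SPDK's nvme_qpair.c: 243:nvme_io_qpair_print_command prints each queued READ/WRITE (qid, cid, nsid, lba, len and SGL descriptor), and 474:spdk_nvme_print_completion prints the matching completion. Every completion in this stretch carries the status ASYMMETRIC ACCESS INACCESSIBLE (03/02), which corresponds to NVMe status code type 0x3 (path related), status code 0x02 (ANA inaccessible) -- consistent with the test toggling the path's ANA state while I/O is in flight. A minimal sketch for summarizing such a log, assuming only the line format visible above (script name and usage are illustrative, not part of the SPDK test suite):

# tally_completions.py - count spdk_nvme_print_completion NOTICE entries by (sct/sc) status
import collections
import re
import sys

# One console line may hold several wrapped log entries, so scan with finditer.
COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: "
    r"(?P<status>.+?) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

counts = collections.Counter()
for line in sys.stdin:
    for m in COMPLETION_RE.finditer(line):
        counts[(m.group("status"), m.group("sct"), m.group("sc"))] += 1

for (status, sct, sc), n in counts.most_common():
    print(f"{n:6d}  {status} (sct=0x{sct} sc=0x{sc})")

Run as "python3 tally_completions.py < console.log" (file name illustrative); for the excerpt above it would report a single bucket, ASYMMETRIC ACCESS INACCESSIBLE (sct=0x03 sc=0x02).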
00:19:36.707 [2024-06-10 08:14:17.021073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.021088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.021118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.021132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.021163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.021178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.021223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.707 [2024-06-10 08:14:17.021253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:36.707 [2024-06-10 08:14:17.021283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:17.021824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:17.021845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.080461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.080544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.080582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.080654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.080690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.080725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.080761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.080816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.080853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.080889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.080924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.080973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.080995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:39 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.708 [2024-06-10 08:14:24.081430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.081472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.081508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.708 [2024-06-10 08:14:24.081545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.708 [2024-06-10 08:14:24.081566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.081588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.081626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.081662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.081697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.081733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.081769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.081824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.081861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.081897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.081933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.081969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.081990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 
cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082534] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 
08:14:24.082906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.709 [2024-06-10 08:14:24.082941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.082970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.082986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.083009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.709 [2024-06-10 08:14:24.083023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.709 [2024-06-10 08:14:24.083045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109376 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.083533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.083978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.083993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:36.710 [2024-06-10 08:14:24.084015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.084029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.084050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.084065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.084086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.084101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.084122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.710 [2024-06-10 08:14:24.084136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.084157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.084172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.084204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.084219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.084241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.710 [2024-06-10 08:14:24.084255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:36.710 [2024-06-10 08:14:24.084277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.084678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.084692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.085409] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:24.085469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:24.085535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:24.085581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:24.085625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:24.085671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:24.085715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:24.085761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:24.085843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.085891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.085936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.085966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.085981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.086011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.086026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.086057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.086072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.086112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.086128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.086158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.086173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:24.086203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:24.086219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:37.526173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:37.526270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:37.526310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:43 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:37.526348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:37.526392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:37.526429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:37.526467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.711 [2024-06-10 08:14:37.526503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:37.526542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.711 [2024-06-10 08:14:37.526609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.711 [2024-06-10 08:14:37.526632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.526648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.526669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.526683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.526704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.526718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 
08:14:37.526739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.526754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.526775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.526810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.526834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.526850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.526871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.526886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.526912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.526927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.526948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.526963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.526985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:36.712 [2024-06-10 08:14:37.527518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.527720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527858] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.527971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.527988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.712 [2024-06-10 08:14:37.528002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.528018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.528031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.528048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.712 [2024-06-10 08:14:37.528062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-06-10 08:14:37.528077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528164] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37680 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.528477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:36.713 [2024-06-10 08:14:37.528803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.528981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.713 [2024-06-10 08:14:37.528995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.529024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.529064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.529093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.529122] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.529158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.529188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.529221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.529260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.529290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.713 [2024-06-10 08:14:37.529319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.713 [2024-06-10 08:14:37.529334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.714 [2024-06-10 08:14:37.529348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.714 [2024-06-10 08:14:37.529377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.714 [2024-06-10 08:14:37.529406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.714 [2024-06-10 08:14:37.529434] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.714 [2024-06-10 08:14:37.529463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.714 [2024-06-10 08:14:37.529492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.714 [2024-06-10 08:14:37.529969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.529983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1100 is same with the state(5) to be set 00:19:36.714 [2024-06-10 08:14:37.530001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.714 [2024-06-10 08:14:37.530012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.714 [2024-06-10 08:14:37.530023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37360 len:8 PRP1 0x0 PRP2 0x0 00:19:36.714 [2024-06-10 08:14:37.530038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.530053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.714 [2024-06-10 08:14:37.530063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.714 [2024-06-10 08:14:37.530073] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37816 len:8 PRP1 0x0 PRP2 0x0 00:19:36.714 [2024-06-10 08:14:37.530085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.530098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.714 [2024-06-10 08:14:37.530108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.714 [2024-06-10 08:14:37.530118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37824 len:8 PRP1 0x0 PRP2 0x0 00:19:36.714 [2024-06-10 08:14:37.530131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.530144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.714 [2024-06-10 08:14:37.530153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.714 [2024-06-10 08:14:37.530163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37832 len:8 PRP1 0x0 PRP2 0x0 00:19:36.714 [2024-06-10 08:14:37.530176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.530195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.714 [2024-06-10 08:14:37.530205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.714 [2024-06-10 08:14:37.530215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37840 len:8 PRP1 0x0 PRP2 0x0 00:19:36.714 [2024-06-10 08:14:37.530228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.530242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.714 [2024-06-10 08:14:37.530251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.714 [2024-06-10 08:14:37.530261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37848 len:8 PRP1 0x0 PRP2 0x0 00:19:36.714 [2024-06-10 08:14:37.530274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.530287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.714 [2024-06-10 08:14:37.530297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.714 [2024-06-10 08:14:37.530307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37856 len:8 PRP1 0x0 PRP2 0x0 00:19:36.714 [2024-06-10 08:14:37.530326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.714 [2024-06-10 08:14:37.530341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.714 [2024-06-10 08:14:37.530350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.715 [2024-06-10 08:14:37.530361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:37864 len:8 PRP1 0x0 PRP2 0x0 00:19:36.715 [2024-06-10 08:14:37.530374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.530387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.715 [2024-06-10 08:14:37.530396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.715 [2024-06-10 08:14:37.530406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37872 len:8 PRP1 0x0 PRP2 0x0 00:19:36.715 [2024-06-10 08:14:37.530420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.530434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.715 [2024-06-10 08:14:37.530443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.715 [2024-06-10 08:14:37.530453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37880 len:8 PRP1 0x0 PRP2 0x0 00:19:36.715 [2024-06-10 08:14:37.530466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.530479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.715 [2024-06-10 08:14:37.530490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.715 [2024-06-10 08:14:37.530500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37888 len:8 PRP1 0x0 PRP2 0x0 00:19:36.715 [2024-06-10 08:14:37.530512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.530526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.715 [2024-06-10 08:14:37.530535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.715 [2024-06-10 08:14:37.530545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37896 len:8 PRP1 0x0 PRP2 0x0 00:19:36.715 [2024-06-10 08:14:37.530558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.530577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.715 [2024-06-10 08:14:37.530587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.715 [2024-06-10 08:14:37.530597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37904 len:8 PRP1 0x0 PRP2 0x0 00:19:36.715 [2024-06-10 08:14:37.530610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.530624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.715 [2024-06-10 08:14:37.530633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.715 [2024-06-10 08:14:37.530643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37912 len:8 PRP1 0x0 PRP2 
0x0 00:19:36.715 [2024-06-10 08:14:37.530656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.530669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.715 [2024-06-10 08:14:37.530679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.715 [2024-06-10 08:14:37.530694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37920 len:8 PRP1 0x0 PRP2 0x0 00:19:36.715 [2024-06-10 08:14:37.530708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.530722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.715 [2024-06-10 08:14:37.530731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.715 [2024-06-10 08:14:37.530742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37928 len:8 PRP1 0x0 PRP2 0x0 00:19:36.715 [2024-06-10 08:14:37.530754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.530767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.715 [2024-06-10 08:14:37.530777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.715 [2024-06-10 08:14:37.530800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37936 len:8 PRP1 0x0 PRP2 0x0 00:19:36.715 [2024-06-10 08:14:37.530816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.530883] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11a1100 was disconnected and freed. reset controller. 
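The wall of *NOTICE* lines above is the host draining a single I/O queue pair: every queued WRITE and READ on qid:1 is completed with the synthetic status ABORTED - SQ DELETION (00/08) while the TCP connection is torn down, after which bdev_nvme frees the qpair (0x11a1100) and schedules a controller reset. That is the expected reaction to the path flip the multipath test drives from the target side. A minimal sketch of such a flip, assuming the default target RPC socket and the subsystem and ports seen elsewhere in this log (the exact listener toggled by this particular iteration is not visible in this excerpt):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Advertise the standby path first, then drop the active one; outstanding I/O on the
    # old path is aborted (as in the notices above) and the initiator reconnects.
    $rpc nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420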
00:19:36.715 [2024-06-10 08:14:37.532107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:36.715 [2024-06-10 08:14:37.532211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.715 [2024-06-10 08:14:37.532234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.715 [2024-06-10 08:14:37.532270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a1630 (9): Bad file descriptor 00:19:36.715 [2024-06-10 08:14:37.532725] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:36.715 [2024-06-10 08:14:37.532757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a1630 with addr=10.0.0.2, port=4421 00:19:36.715 [2024-06-10 08:14:37.532774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1630 is same with the state(5) to be set 00:19:36.715 [2024-06-10 08:14:37.532876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a1630 (9): Bad file descriptor 00:19:36.715 [2024-06-10 08:14:37.532911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:36.715 [2024-06-10 08:14:37.532927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:36.715 [2024-06-10 08:14:37.532941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:36.715 [2024-06-10 08:14:37.532991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:36.715 [2024-06-10 08:14:37.533009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:36.715 [2024-06-10 08:14:47.596133] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
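Note the recovery timing: the first reconnect attempt to 10.0.0.2:4421 is refused (connect() errno 111) at 08:14:37 and the reset initially fails, but bdev_nvme keeps retrying and logs "Resetting controller successful" ten seconds later at 08:14:47. How long and how often it retries is decided when the controller is attached. A sketch of where those knobs live, using the rpc.py form that appears later in this log; the port and the 5 s / 2 s values here are illustrative, since the multipath run's own attach happens before this excerpt:

    # --reconnect-delay-sec N      wait N seconds between reconnect attempts on a lost path
    # --ctrlr-loss-timeout-sec N   keep retrying for N seconds before giving the controller up
    #                              (-1 retries indefinitely)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2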
00:19:36.715 Received shutdown signal, test time was about 55.439239 seconds
00:19:36.715
00:19:36.715 Latency(us)
00:19:36.715 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:36.715 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:36.715 Verification LBA range: start 0x0 length 0x4000
00:19:36.715 Nvme0n1            :      55.44    7583.14      29.62       0.00       0.00   16846.53     233.66 7015926.69
00:19:36.715 ===================================================================================================================
00:19:36.715 Total              :               7583.14      29.62       0.00       0.00   16846.53     233.66 7015926.69
00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.715 rmmod nvme_tcp 00:19:36.715 rmmod nvme_fabrics 00:19:36.715 rmmod nvme_keyring 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80910 ']' 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80910 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@949 -- # '[' -z 80910 ']' 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # kill -0 80910 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # uname 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80910 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:36.715 killing process with pid 80910 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80910' 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@968 -- # kill 80910 00:19:36.715 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@973 -- # wait 80910 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath --
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:36.975 00:19:36.975 real 1m1.634s 00:19:36.975 user 2m49.684s 00:19:36.975 sys 0m19.657s 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:36.975 08:14:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:36.975 ************************************ 00:19:36.975 END TEST nvmf_host_multipath 00:19:36.975 ************************************ 00:19:36.975 08:14:58 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:36.975 08:14:58 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:36.975 08:14:58 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:36.975 08:14:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:36.975 ************************************ 00:19:36.975 START TEST nvmf_timeout 00:19:36.975 ************************************ 00:19:36.975 08:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:37.235 * Looking for test storage... 
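The multipath case above ends cleanly: 7583.14 IOPS over the 55.44 s verify run is consistent with the reported 29.62 MiB/s (7583.14 * 4096 bytes is about 29.62 MiB per second), the target and the nvme kernel modules are torn down, and run_test moves straight on to nvmf_timeout. The banner-wrapped invocation traced above amounts to running the script directly; a minimal sketch, assuming the usual autotest environment variables are already exported:

    cd /home/vagrant/spdk_repo/spdk
    ./test/nvmf/host/timeout.sh --transport=tcp    # the command run_test nvmf_timeout wraps above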
00:19:37.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.235 
08:14:58 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.235 08:14:58 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:37.235 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:37.236 Cannot find device "nvmf_tgt_br" 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.236 Cannot find device "nvmf_tgt_br2" 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:37.236 Cannot find device "nvmf_tgt_br" 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:37.236 Cannot find device "nvmf_tgt_br2" 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:37.236 08:14:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.236 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:37.236 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:37.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:19:37.495 00:19:37.495 --- 10.0.0.2 ping statistics --- 00:19:37.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.495 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:37.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:37.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:19:37.495 00:19:37.495 --- 10.0.0.3 ping statistics --- 00:19:37.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.495 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:37.495 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:37.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:19:37.495 00:19:37.495 --- 10.0.0.1 ping statistics --- 00:19:37.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.496 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82077 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82077 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@830 -- # '[' -z 82077 ']' 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:37.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:37.496 08:14:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.496 [2024-06-10 08:14:59.322677] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:19:37.496 [2024-06-10 08:14:59.322774] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.755 [2024-06-10 08:14:59.463242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:37.755 [2024-06-10 08:14:59.568379] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.755 [2024-06-10 08:14:59.568434] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.755 [2024-06-10 08:14:59.568461] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.755 [2024-06-10 08:14:59.568484] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.755 [2024-06-10 08:14:59.568492] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.755 [2024-06-10 08:14:59.568680] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.755 [2024-06-10 08:14:59.568689] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.013 [2024-06-10 08:14:59.628145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:38.610 08:15:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:38.610 08:15:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@863 -- # return 0 00:19:38.610 08:15:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.610 08:15:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:38.610 08:15:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:38.610 08:15:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.610 08:15:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:38.610 08:15:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:38.868 [2024-06-10 08:15:00.608600] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.868 08:15:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:39.127 Malloc0 00:19:39.127 08:15:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:39.385 08:15:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:39.643 08:15:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.902 [2024-06-10 08:15:01.618242] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.902 08:15:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82126 00:19:39.902 08:15:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:39.902 08:15:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82126 /var/tmp/bdevperf.sock 00:19:39.902 08:15:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@830 -- # '[' -z 82126 ']' 00:19:39.902 08:15:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.902 08:15:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:39.902 08:15:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.902 08:15:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:39.902 08:15:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:39.902 [2024-06-10 08:15:01.690840] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:19:39.902 [2024-06-10 08:15:01.690947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82126 ] 00:19:40.161 [2024-06-10 08:15:01.831562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.161 [2024-06-10 08:15:01.950375] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.161 [2024-06-10 08:15:02.012104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:41.100 08:15:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:41.100 08:15:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@863 -- # return 0 00:19:41.100 08:15:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:41.100 08:15:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:41.665 NVMe0n1 00:19:41.665 08:15:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82150 00:19:41.665 08:15:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:41.665 08:15:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:41.665 Running I/O for 10 seconds... 
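With the target listening on 10.0.0.2:4420, the timeout test hands I/O generation to bdevperf in RPC mode: -z starts it idle on its own socket, the NVMe bdev is attached over that socket with a 5 s controller-loss timeout and a 2 s reconnect delay, and perform_tests kicks off the 10 second verify workload whose progress the timeout cases then observe. A condensed sketch of the sequence traced above, with the arguments copied from the trace (the explicit backgrounding is an assumption; the script itself parks bdevperf and uses waitforlisten on the RPC socket before issuing any commands):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    $bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 -f &   # stays idle until perform_tests
    # wait here until $sock exists and answers RPCs (the test uses waitforlisten for this)
    $rpc -s $sock bdev_nvme_set_options -r -1
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests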
00:19:42.599 08:15:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.860 [2024-06-10 08:15:04.513591] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e1de0 is same with the state(5) to be set 00:19:42.860 [2024-06-10 08:15:04.513666] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e1de0 is same with the state(5) to be set 00:19:42.860 [2024-06-10 08:15:04.513967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.860 [2024-06-10 08:15:04.514087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.860 [2024-06-10 08:15:04.514129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.860 [2024-06-10 08:15:04.514163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.860 [2024-06-10 08:15:04.514184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.860 [2024-06-10 08:15:04.514204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.860 [2024-06-10 08:15:04.514233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.860 [2024-06-10 08:15:04.514253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.860 [2024-06-10 08:15:04.514273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514437] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.860 [2024-06-10 08:15:04.514508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.860 [2024-06-10 08:15:04.514517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.861 [2024-06-10 08:15:04.514616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.861 [2024-06-10 08:15:04.514636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.861 [2024-06-10 08:15:04.514656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.861 [2024-06-10 08:15:04.514675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.861 [2024-06-10 08:15:04.514696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.861 [2024-06-10 08:15:04.514717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.861 [2024-06-10 08:15:04.514736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.861 [2024-06-10 08:15:04.514757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:42.861 [2024-06-10 08:15:04.514872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.514983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.514994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515076] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.861 [2024-06-10 08:15:04.515291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.861 [2024-06-10 08:15:04.515313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.861 [2024-06-10 08:15:04.515323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.861 [2024-06-10 08:15:04.515332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87152 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.862 [2024-06-10 08:15:04.515789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 
[2024-06-10 08:15:04.515936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.515988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.515997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.516007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.516016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.516027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.516036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.516046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.516055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.516075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.516084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.862 [2024-06-10 08:15:04.516095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.862 [2024-06-10 08:15:04.516107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.863 [2024-06-10 08:15:04.516138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.863 [2024-06-10 08:15:04.516160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.863 [2024-06-10 08:15:04.516180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.863 [2024-06-10 08:15:04.516201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.863 [2024-06-10 08:15:04.516222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.863 [2024-06-10 08:15:04.516242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.863 [2024-06-10 08:15:04.516263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.863 [2024-06-10 08:15:04.516283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.863 [2024-06-10 08:15:04.516303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.863 [2024-06-10 08:15:04.516323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.863 [2024-06-10 08:15:04.516343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.863 [2024-06-10 08:15:04.516363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.863 [2024-06-10 08:15:04.516383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbbcf0 is same with the state(5) to be set 00:19:42.863 [2024-06-10 08:15:04.516411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87240 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87608 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87616 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87624 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87632 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516593] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87640 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87648 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87656 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87664 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87672 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87680 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.863 [2024-06-10 08:15:04.516822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87688 len:8 PRP1 0x0 PRP2 0x0 00:19:42.863 [2024-06-10 08:15:04.516845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.863 [2024-06-10 08:15:04.516856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.863 [2024-06-10 08:15:04.516863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.864 [2024-06-10 08:15:04.516871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87696 len:8 PRP1 0x0 PRP2 0x0 00:19:42.864 [2024-06-10 08:15:04.516880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.864 [2024-06-10 08:15:04.516889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.864 [2024-06-10 08:15:04.516897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.864 [2024-06-10 08:15:04.516904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87704 len:8 PRP1 0x0 PRP2 0x0 00:19:42.864 [2024-06-10 08:15:04.516913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.864 [2024-06-10 08:15:04.516922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.864 [2024-06-10 08:15:04.516929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.864 [2024-06-10 08:15:04.516937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87712 len:8 PRP1 0x0 PRP2 0x0 00:19:42.864 [2024-06-10 08:15:04.516945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.864 [2024-06-10 08:15:04.516954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.864 [2024-06-10 08:15:04.516962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.864 [2024-06-10 08:15:04.516969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87720 len:8 PRP1 0x0 PRP2 0x0 00:19:42.864 [2024-06-10 08:15:04.516978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.864 [2024-06-10 08:15:04.516987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.864 [2024-06-10 08:15:04.517000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.864 [2024-06-10 08:15:04.517020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87728 len:8 PRP1 0x0 PRP2 0x0 00:19:42.864 [2024-06-10 08:15:04.517030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.864 [2024-06-10 08:15:04.517084] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bbbcf0 was disconnected and freed. 
reset controller. 00:19:42.864 [2024-06-10 08:15:04.517340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:42.864 [2024-06-10 08:15:04.517421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4a710 (9): Bad file descriptor 00:19:42.864 [2024-06-10 08:15:04.517541] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.864 [2024-06-10 08:15:04.517574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4a710 with addr=10.0.0.2, port=4420 00:19:42.864 [2024-06-10 08:15:04.517587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4a710 is same with the state(5) to be set 00:19:42.864 [2024-06-10 08:15:04.517604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4a710 (9): Bad file descriptor 00:19:42.864 [2024-06-10 08:15:04.517621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:42.864 [2024-06-10 08:15:04.517631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:42.864 [2024-06-10 08:15:04.517640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:42.864 [2024-06-10 08:15:04.517660] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.864 [2024-06-10 08:15:04.517677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:42.864 08:15:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:44.764 [2024-06-10 08:15:06.518030] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.764 [2024-06-10 08:15:06.518133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4a710 with addr=10.0.0.2, port=4420 00:19:44.764 [2024-06-10 08:15:06.518173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4a710 is same with the state(5) to be set 00:19:44.764 [2024-06-10 08:15:06.518209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4a710 (9): Bad file descriptor 00:19:44.764 [2024-06-10 08:15:06.518261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:44.764 [2024-06-10 08:15:06.518283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:44.764 [2024-06-10 08:15:06.518301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:44.764 [2024-06-10 08:15:06.518339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
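The repeated connect() failures with errno = 111 above are the point of this phase of the test: the target's listener has been removed, so every reconnect attempt from the initiator is refused and the controller stays in the failed state while bdev_nvme keeps retrying. The trace that follows (host/timeout.sh@56-58) waits briefly and then confirms the controller and its bdev are still registered. A minimal sketch of that wait-and-check step, assuming the get_controller/get_bdev helpers are thin wrappers around the rpc.py calls visible in the trace:

  # Give the initiator time for a couple of failed reconnect attempts, then confirm
  # that bdevperf still reports the controller and the bdev (the loss timeout has
  # not expired yet). The helper shape is an assumption; the commands and expected
  # names are taken from the surrounding trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sleep 2
  [[ $($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name') == NVMe0 ]]
  [[ $($rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name') == NVMe0n1 ]]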
00:19:44.764 [2024-06-10 08:15:06.518354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:44.764 08:15:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:44.764 08:15:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:44.764 08:15:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:45.022 08:15:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:45.022 08:15:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:45.022 08:15:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:45.022 08:15:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:45.281 08:15:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:45.281 08:15:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:46.657 [2024-06-10 08:15:08.518609] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.657 [2024-06-10 08:15:08.518698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4a710 with addr=10.0.0.2, port=4420 00:19:46.657 [2024-06-10 08:15:08.518715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4a710 is same with the state(5) to be set 00:19:46.657 [2024-06-10 08:15:08.518740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4a710 (9): Bad file descriptor 00:19:46.657 [2024-06-10 08:15:08.518759] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:46.657 [2024-06-10 08:15:08.518768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:46.657 [2024-06-10 08:15:08.518779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:46.657 [2024-06-10 08:15:08.518832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:46.657 [2024-06-10 08:15:08.518846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:49.187 [2024-06-10 08:15:10.518939] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
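Once the 5-second sleep at host/timeout.sh@61 has passed and the reconnect attempts above are still failing, the controller-loss timeout runs out and bdev_nvme deletes the controller together with its bdev. That is why the same two queries in the next part of the trace now compare empty strings. A sketch of that second check, assuming the first bdevperf instance was attached with the same --ctrlr-loss-timeout-sec 5 setting that is visible for the second instance further down:

  # After the controller-loss timeout has expired, both queries should return
  # nothing: the controller NVMe0 and the bdev NVMe0n1 have been torn down.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  [[ $($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name') == '' ]]
  [[ $($rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name') == '' ]]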
00:19:49.754 
00:19:49.754                                                 Latency(us)
00:19:49.754 Device Information     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:49.754 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:49.754   Verification LBA range: start 0x0 length 0x4000
00:19:49.754   NVMe0n1              :       8.17    1327.32       5.18      15.67       0.00   95157.42    3932.16 7015926.69
00:19:49.754 ===================================================================================================================
00:19:49.754 Total                  :            1327.32       5.18      15.67       0.00   95157.42    3932.16 7015926.69
00:19:49.754 0
00:19:50.320 08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:19:50.320 08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:50.579 08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:50.579 08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:19:50.579 08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:50.579 08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82150
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82126
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@949 -- # '[' -z 82126 ']'
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # kill -0 82126
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # uname
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 82126
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:19:50.837 killing process with pid 82126
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 82126'
00:19:50.837 Received shutdown signal, test time was about 9.283289 seconds
00:19:50.837 
00:19:50.837                                                 Latency(us)
00:19:50.837 Device Information     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:50.837 ===================================================================================================================
00:19:50.837 Total                  :               0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@968 -- # kill 82126
00:19:50.837 08:15:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@973 -- # wait 82126
00:19:51.095 08:15:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-06-10 08:15:13.065723] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:51.354 08:15:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82266
00:19:51.354 08:15:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:51.354 08:15:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82266 /var/tmp/bdevperf.sock 00:19:51.354 08:15:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@830 -- # '[' -z 82266 ']' 00:19:51.354 08:15:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.354 08:15:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:51.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.354 08:15:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.354 08:15:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:51.354 08:15:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:51.354 [2024-06-10 08:15:13.131095] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:19:51.354 [2024-06-10 08:15:13.131227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82266 ] 00:19:51.612 [2024-06-10 08:15:13.262071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.612 [2024-06-10 08:15:13.350584] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.612 [2024-06-10 08:15:13.407179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:52.569 08:15:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:52.569 08:15:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@863 -- # return 0 00:19:52.569 08:15:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:52.569 08:15:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:52.828 NVMe0n1 00:19:52.828 08:15:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:52.828 08:15:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82290 00:19:52.828 08:15:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:52.828 Running I/O for 10 seconds... 
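Taken together, the setup traced above re-adds the TCP listener on the target, starts a fresh idle bdevperf instance, attaches the remote controller with explicit reconnect settings, and then kicks off the verify workload over the RPC socket. The sketch below is assembled only from the commands visible in the log (paths, NQN, address, and flag values are copied from the trace; waitforlisten is the autotest helper that polls for the RPC socket). With these flags bdev_nvme is expected to retry the connection every second, fail outstanding I/O fast after 2 seconds, and give the controller up entirely after 5 seconds without a successful reconnect.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: make nqn.2016-06.io.spdk:cnode1 reachable again on 10.0.0.2:4420.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: start bdevperf idle (-z) and wait for its RPC socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
  # Attach the remote controller with the timeout/reconnect settings under test.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 \
      --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # Drive the configured verify workload from a separate process.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &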
00:19:53.763 08:15:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:54.025 [2024-06-10 08:15:15.822173] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822231] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822241] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822248] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822255] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822263] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822270] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822278] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822285] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822292] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822299] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822306] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822313] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822320] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822327] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822334] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822342] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822349] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822356] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822363] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822370] 
tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.025 [2024-06-10 08:15:15.822377] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set [message repeated verbatim for each log entry from 08:15:15.822384 through 08:15:15.823015] 00:19:54.026 [2024-06-10 08:15:15.823022] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same
with the state(5) to be set 00:19:54.026 [2024-06-10 08:15:15.823029] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.026 [2024-06-10 08:15:15.823036] tcp.c:1673:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239c20 is same with the state(5) to be set 00:19:54.026 [2024-06-10 08:15:15.823089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.026 [2024-06-10 08:15:15.823118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.026 [2024-06-10 08:15:15.823138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.026 [2024-06-10 08:15:15.823147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.026 [2024-06-10 08:15:15.823158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.026 [2024-06-10 08:15:15.823167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.026 [2024-06-10 08:15:15.823177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.026 [2024-06-10 08:15:15.823199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.026 [2024-06-10 08:15:15.823209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.026 [2024-06-10 08:15:15.823217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.026 [2024-06-10 08:15:15.823227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.026 [2024-06-10 08:15:15.823235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.026 [2024-06-10 08:15:15.823244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.026 [2024-06-10 08:15:15.823252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.026 [2024-06-10 08:15:15.823262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.026 [2024-06-10 08:15:15.823270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 
08:15:15.823297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73304 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.823984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.823992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.824001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.824010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.027 [2024-06-10 08:15:15.824020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.027 [2024-06-10 08:15:15.824029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:54.028 [2024-06-10 08:15:15.824072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824275] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.028 [2024-06-10 08:15:15.824775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.028 [2024-06-10 08:15:15.824784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.824803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.824813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.824839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.824847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.824858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.824866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.824876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.824884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.824893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.824901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.824911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.824919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.824928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.824936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.824946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.824954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.824963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.824976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.824986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.824995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 
[2024-06-10 08:15:15.825070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.029 [2024-06-10 08:15:15.825267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:22 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.029 [2024-06-10 08:15:15.825494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.029 [2024-06-10 08:15:15.825502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.030 [2024-06-10 08:15:15.825512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.030 [2024-06-10 08:15:15.825520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.030 [2024-06-10 08:15:15.825529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.030 [2024-06-10 08:15:15.825537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.030 [2024-06-10 08:15:15.825546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.030 [2024-06-10 08:15:15.825554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.030 [2024-06-10 08:15:15.825564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.030 [2024-06-10 08:15:15.825572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.030 [2024-06-10 08:15:15.825582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.030 [2024-06-10 08:15:15.825590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.030 [2024-06-10 08:15:15.825599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3e10 is same with the state(5) to be set 00:19:54.030 [2024-06-10 08:15:15.825609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:54.030 [2024-06-10 08:15:15.825616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:54.030 [2024-06-10 08:15:15.825629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73888 len:8 PRP1 0x0 PRP2 0x0 00:19:54.030 [2024-06-10 08:15:15.825637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.030 [2024-06-10 08:15:15.825687] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e3e10 was disconnected and freed. reset controller. 
00:19:54.030 [2024-06-10 08:15:15.825951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:54.030 [2024-06-10 08:15:15.826021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272710 (9): Bad file descriptor 00:19:54.030 [2024-06-10 08:15:15.826135] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.030 [2024-06-10 08:15:15.826154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1272710 with addr=10.0.0.2, port=4420 00:19:54.030 [2024-06-10 08:15:15.826164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272710 is same with the state(5) to be set 00:19:54.030 [2024-06-10 08:15:15.826180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272710 (9): Bad file descriptor 00:19:54.030 [2024-06-10 08:15:15.826195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:54.030 [2024-06-10 08:15:15.826204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:54.030 [2024-06-10 08:15:15.826213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:54.030 [2024-06-10 08:15:15.826247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:54.030 [2024-06-10 08:15:15.826262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:54.030 08:15:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:54.966 [2024-06-10 08:15:16.826400] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.966 [2024-06-10 08:15:16.826457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1272710 with addr=10.0.0.2, port=4420 00:19:54.966 [2024-06-10 08:15:16.826473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272710 is same with the state(5) to be set 00:19:54.966 [2024-06-10 08:15:16.826498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272710 (9): Bad file descriptor 00:19:54.966 [2024-06-10 08:15:16.826532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:54.966 [2024-06-10 08:15:16.826557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:54.966 [2024-06-10 08:15:16.826566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:54.966 [2024-06-10 08:15:16.826591] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:54.966 [2024-06-10 08:15:16.826602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:55.225 08:15:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.225 [2024-06-10 08:15:17.066905] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.225 08:15:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82290 00:19:56.159 [2024-06-10 08:15:17.846202] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:04.273 00:20:04.273 Latency(us) 00:20:04.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.273 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:04.273 Verification LBA range: start 0x0 length 0x4000 00:20:04.273 NVMe0n1 : 10.01 6768.34 26.44 0.00 0.00 18884.92 1563.93 3035150.89 00:20:04.273 =================================================================================================================== 00:20:04.273 Total : 6768.34 26.44 0.00 0.00 18884.92 1563.93 3035150.89 00:20:04.273 0 00:20:04.273 08:15:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82400 00:20:04.273 08:15:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:20:04.273 08:15:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:04.273 Running I/O for 10 seconds... 00:20:04.273 08:15:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.273 [2024-06-10 08:15:25.971899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.273 [2024-06-10 08:15:25.971975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.273 [2024-06-10 08:15:25.972005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.273 [2024-06-10 08:15:25.972015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.273 [2024-06-10 08:15:25.972025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.273 [2024-06-10 08:15:25.972034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.273 [2024-06-10 08:15:25.972044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.273 [2024-06-10 08:15:25.972052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.273 [2024-06-10 08:15:25.972061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272710 is same with the state(5) to be set 00:20:04.273 [2024-06-10 08:15:25.972359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64672 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
[... repeated nvme_io_qpair_print_command (WRITE/READ) and spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" notice pairs for the remaining outstanding commands on qid:1 (lba 64680 through 65680) elided; each queued I/O was aborted while the qpair was torn down for the controller reset ...]
00:20:04.276 [2024-06-10 08:15:25.975100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1980 is same with the state(5) to be set
00:20:04.276 [2024-06-10 08:15:25.975111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:04.276 [2024-06-10 08:15:25.975119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:04.276 [2024-06-10 08:15:25.975127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65688 len:8 PRP1 0x0 PRP2 0x0
00:20:04.276 [2024-06-10 08:15:25.975136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:04.276 [2024-06-10 08:15:25.975189] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e1980 was disconnected and freed. reset controller.
00:20:04.276 [2024-06-10 08:15:25.975411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:04.276 [2024-06-10 08:15:25.975432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272710 (9): Bad file descriptor
00:20:04.276 [2024-06-10 08:15:25.975520] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:04.276 [2024-06-10 08:15:25.975557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1272710 with addr=10.0.0.2, port=4420
00:20:04.276 [2024-06-10 08:15:25.975567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272710 is same with the state(5) to be set
00:20:04.276 [2024-06-10 08:15:25.975584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272710 (9): Bad file descriptor
00:20:04.276 [2024-06-10 08:15:25.975599] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:04.276 [2024-06-10 08:15:25.975615] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:04.276 [2024-06-10 08:15:25.975625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:04.276 [2024-06-10 08:15:25.975643] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:04.276 [2024-06-10 08:15:25.975653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:04.276 08:15:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:20:05.212 [2024-06-10 08:15:26.975768] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:05.212 [2024-06-10 08:15:26.975858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1272710 with addr=10.0.0.2, port=4420
00:20:05.212 [2024-06-10 08:15:26.975875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272710 is same with the state(5) to be set
00:20:05.212 [2024-06-10 08:15:26.975898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272710 (9): Bad file descriptor
00:20:05.212 [2024-06-10 08:15:26.975916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:05.212 [2024-06-10 08:15:26.975926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:05.212 [2024-06-10 08:15:26.975937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:05.212 [2024-06-10 08:15:26.975960] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:05.212 [2024-06-10 08:15:26.975972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:06.147 [2024-06-10 08:15:27.976059] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.147 [2024-06-10 08:15:27.976129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1272710 with addr=10.0.0.2, port=4420
00:20:06.147 [2024-06-10 08:15:27.976143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272710 is same with the state(5) to be set
00:20:06.147 [2024-06-10 08:15:27.976161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272710 (9): Bad file descriptor
00:20:06.147 [2024-06-10 08:15:27.976176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:06.147 [2024-06-10 08:15:27.976185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:06.147 [2024-06-10 08:15:27.976193] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:06.147 [2024-06-10 08:15:27.976212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:06.147 [2024-06-10 08:15:27.976222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:07.523 [2024-06-10 08:15:28.979504] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:07.523 [2024-06-10 08:15:28.979582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1272710 with addr=10.0.0.2, port=4420
00:20:07.523 [2024-06-10 08:15:28.979598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272710 is same with the state(5) to be set
00:20:07.523 [2024-06-10 08:15:28.979871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272710 (9): Bad file descriptor
00:20:07.523 [2024-06-10 08:15:28.980109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:07.523 [2024-06-10 08:15:28.980123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:07.523 [2024-06-10 08:15:28.980133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:07.523 [2024-06-10 08:15:28.983743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:07.523 [2024-06-10 08:15:28.983809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:07.523 08:15:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:07.523 [2024-06-10 08:15:29.223355] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:07.523 08:15:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82400
00:20:08.479 [2024-06-10 08:15:30.016139] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:13.743 00:20:13.743 Latency(us) 00:20:13.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.744 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:13.744 Verification LBA range: start 0x0 length 0x4000 00:20:13.744 NVMe0n1 : 10.01 5545.71 21.66 3739.50 0.00 13758.59 629.29 3019898.88 00:20:13.744 =================================================================================================================== 00:20:13.744 Total : 5545.71 21.66 3739.50 0.00 13758.59 0.00 3019898.88 00:20:13.744 0 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82266 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@949 -- # '[' -z 82266 ']' 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # kill -0 82266 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # uname 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 82266 00:20:13.744 killing process with pid 82266 00:20:13.744 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.744 00:20:13.744 Latency(us) 00:20:13.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.744 =================================================================================================================== 00:20:13.744 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 82266' 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@968 -- # kill 82266 00:20:13.744 08:15:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@973 -- # wait 82266 00:20:13.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.744 08:15:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82509 00:20:13.744 08:15:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:13.744 08:15:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82509 /var/tmp/bdevperf.sock 00:20:13.744 08:15:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@830 -- # '[' -z 82509 ']' 00:20:13.744 08:15:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.744 08:15:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:13.744 08:15:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.744 08:15:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:13.744 08:15:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:13.744 [2024-06-10 08:15:35.177412] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:20:13.744 [2024-06-10 08:15:35.177547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82509 ] 00:20:13.744 [2024-06-10 08:15:35.317691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.744 [2024-06-10 08:15:35.427358] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.744 [2024-06-10 08:15:35.480940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:14.311 08:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:14.311 08:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@863 -- # return 0 00:20:14.311 08:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82525 00:20:14.311 08:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82509 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:14.311 08:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:14.569 08:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:14.827 NVMe0n1 00:20:15.085 08:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82565 00:20:15.085 08:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.085 08:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:15.085 Running I/O for 10 seconds... 
00:20:16.020 08:15:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.280 [2024-06-10 08:15:37.918230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 
[2024-06-10 08:15:37.918488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.918982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.918991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.919001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.919010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.919020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.919029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.919040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.919049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.919060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.919068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.280 [2024-06-10 08:15:37.919079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.280 [2024-06-10 08:15:37.919087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 
[2024-06-10 08:15:37.919621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.281 [2024-06-10 08:15:37.919944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.281 [2024-06-10 08:15:37.919953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.919964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.919973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.919984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.919993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44928 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:16.282 [2024-06-10 08:15:37.920485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 
08:15:37.920692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.282 [2024-06-10 08:15:37.920783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.282 [2024-06-10 08:15:37.920792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.920830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.920840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.920851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.920860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.920871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.920880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.920891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.920899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.920910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.920918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.920930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.920939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.920950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.920958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.920969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.920978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.920989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.921009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.921019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.921027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.921037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.283 [2024-06-10 08:15:37.921046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.921056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a53740 is same with the state(5) to be set 00:20:16.283 [2024-06-10 08:15:37.921093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:16.283 [2024-06-10 08:15:37.921102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:16.283 [2024-06-10 08:15:37.921110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66176 len:8 PRP1 0x0 PRP2 0x0 00:20:16.283 [2024-06-10 08:15:37.921119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.283 [2024-06-10 08:15:37.921173] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a53740 was disconnected and freed. reset controller. 
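The reconnect behaviour that follows is governed entirely by the two attach-time values above; as a rough illustration (plain shell arithmetic, not SPDK code), the expected schedule once the qpair has been torn down is:

  reconnect_delay=2   # --reconnect-delay-sec from the attach call above
  ctrlr_loss=5        # --ctrlr-loss-timeout-sec
  for (( t = 0; t <= ctrlr_loss; t += reconnect_delay )); do
      echo "reconnect attempt roughly ${t}s after the disconnect"
  done
  echo "beyond ${ctrlr_loss}s the controller is declared lost and queued I/O completes with errors"

That works out to three attempts two seconds apart, which is what the log records below at 08:15:37, 08:15:39 and 08:15:41 before the final "Resetting controller failed." at 08:15:43.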
00:20:16.283 [2024-06-10 08:15:37.921438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:16.283 [2024-06-10 08:15:37.921520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e5890 (9): Bad file descriptor 00:20:16.283 [2024-06-10 08:15:37.921627] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:16.283 [2024-06-10 08:15:37.921648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e5890 with addr=10.0.0.2, port=4420 00:20:16.283 [2024-06-10 08:15:37.921659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e5890 is same with the state(5) to be set 00:20:16.283 [2024-06-10 08:15:37.921677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e5890 (9): Bad file descriptor 00:20:16.283 [2024-06-10 08:15:37.921693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:16.283 [2024-06-10 08:15:37.921702] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:16.283 [2024-06-10 08:15:37.921713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:16.283 [2024-06-10 08:15:37.921734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:16.283 [2024-06-10 08:15:37.921744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:16.283 08:15:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82565 00:20:18.182 [2024-06-10 08:15:39.922120] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.182 [2024-06-10 08:15:39.922223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e5890 with addr=10.0.0.2, port=4420 00:20:18.182 [2024-06-10 08:15:39.922247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e5890 is same with the state(5) to be set 00:20:18.182 [2024-06-10 08:15:39.922285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e5890 (9): Bad file descriptor 00:20:18.182 [2024-06-10 08:15:39.922312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.182 [2024-06-10 08:15:39.922325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:18.182 [2024-06-10 08:15:39.922338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:18.182 [2024-06-10 08:15:39.922372] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
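Each delayed retry is also recorded by the bpftrace probes attached earlier, and the test's pass condition further down (the grep -c over trace.txt followed by the (( 3 <= 2 )) guard) amounts to requiring more than two delayed reconnects. A sketch of that check, with hypothetical variable names and the comparison read as a failure guard:

  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")   # 3 in this run
  if (( delays <= 2 )); then
      echo "expected more than 2 delayed reconnects, got $delays" >&2
      exit 1
  fi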
00:20:18.182 [2024-06-10 08:15:39.922388] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.082 [2024-06-10 08:15:41.922649] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.082 [2024-06-10 08:15:41.922748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e5890 with addr=10.0.0.2, port=4420 00:20:20.082 [2024-06-10 08:15:41.922769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e5890 is same with the state(5) to be set 00:20:20.082 [2024-06-10 08:15:41.922823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e5890 (9): Bad file descriptor 00:20:20.082 [2024-06-10 08:15:41.922853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.082 [2024-06-10 08:15:41.922867] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.082 [2024-06-10 08:15:41.922881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.082 [2024-06-10 08:15:41.922916] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.082 [2024-06-10 08:15:41.922931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.608 [2024-06-10 08:15:43.923062] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.186 00:20:23.186 Latency(us) 00:20:23.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.187 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:23.187 NVMe0n1 : 8.13 2125.63 8.30 15.75 0.00 59724.54 7864.32 7015926.69 00:20:23.187 =================================================================================================================== 00:20:23.187 Total : 2125.63 8.30 15.75 0.00 59724.54 7864.32 7015926.69 00:20:23.187 0 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:23.187 Attaching 5 probes... 
00:20:23.187 1207.249940: reset bdev controller NVMe0 00:20:23.187 1207.383006: reconnect bdev controller NVMe0 00:20:23.187 3207.733516: reconnect delay bdev controller NVMe0 00:20:23.187 3207.769857: reconnect bdev controller NVMe0 00:20:23.187 5208.273547: reconnect delay bdev controller NVMe0 00:20:23.187 5208.309414: reconnect bdev controller NVMe0 00:20:23.187 7208.813923: reconnect delay bdev controller NVMe0 00:20:23.187 7208.850797: reconnect bdev controller NVMe0 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82525 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82509 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@949 -- # '[' -z 82509 ']' 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # kill -0 82509 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # uname 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 82509 00:20:23.187 killing process with pid 82509 00:20:23.187 Received shutdown signal, test time was about 8.181890 seconds 00:20:23.187 00:20:23.187 Latency(us) 00:20:23.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.187 =================================================================================================================== 00:20:23.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 82509' 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@968 -- # kill 82509 00:20:23.187 08:15:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@973 -- # wait 82509 00:20:23.481 08:15:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.741 08:15:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:23.741 08:15:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:23.741 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:23.741 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:23.741 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.741 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:23.741 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.741 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.741 rmmod nvme_tcp 00:20:23.741 rmmod nvme_fabrics 00:20:23.741 rmmod nvme_keyring 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82077 ']' 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82077 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@949 -- # '[' -z 82077 ']' 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # kill -0 82077 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # uname 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 82077 00:20:24.000 killing process with pid 82077 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 82077' 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@968 -- # kill 82077 00:20:24.000 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@973 -- # wait 82077 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:24.258 ************************************ 00:20:24.258 END TEST nvmf_timeout 00:20:24.258 ************************************ 00:20:24.258 00:20:24.258 real 0m47.162s 00:20:24.258 user 2m18.264s 00:20:24.258 sys 0m5.818s 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:24.258 08:15:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:24.258 08:15:45 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:20:24.258 08:15:45 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:20:24.258 08:15:45 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:24.258 08:15:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:24.258 08:15:46 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:20:24.258 00:20:24.258 real 12m12.924s 00:20:24.258 user 29m37.525s 00:20:24.258 sys 3m10.131s 00:20:24.258 08:15:46 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:24.258 08:15:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:24.258 ************************************ 00:20:24.258 END TEST nvmf_tcp 00:20:24.258 ************************************ 00:20:24.258 08:15:46 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:20:24.258 08:15:46 -- spdk/autotest.sh@292 -- # run_test nvmf_dif 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:24.258 08:15:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:24.258 08:15:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:24.258 08:15:46 -- common/autotest_common.sh@10 -- # set +x 00:20:24.258 ************************************ 00:20:24.258 START TEST nvmf_dif 00:20:24.258 ************************************ 00:20:24.258 08:15:46 nvmf_dif -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:24.517 * Looking for test storage... 00:20:24.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:24.517 08:15:46 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.517 08:15:46 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.517 08:15:46 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.517 08:15:46 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.517 08:15:46 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.517 08:15:46 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.517 
08:15:46 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.517 08:15:46 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:24.517 08:15:46 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.517 08:15:46 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:24.517 08:15:46 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:24.517 08:15:46 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:24.517 08:15:46 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:24.517 08:15:46 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.517 08:15:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:24.517 08:15:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:24.517 08:15:46 nvmf_dif -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:24.517 Cannot find device "nvmf_tgt_br" 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:24.517 Cannot find device "nvmf_tgt_br2" 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:24.517 Cannot find device "nvmf_tgt_br" 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:24.517 Cannot find device "nvmf_tgt_br2" 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.517 08:15:46 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:24.775 08:15:46 nvmf_dif -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:24.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:20:24.775 00:20:24.775 --- 10.0.0.2 ping statistics --- 00:20:24.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.775 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:24.775 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:24.775 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:20:24.775 00:20:24.775 --- 10.0.0.3 ping statistics --- 00:20:24.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.775 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:24.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:20:24.775 00:20:24.775 --- 10.0.0.1 ping statistics --- 00:20:24.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.775 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:24.775 08:15:46 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:25.033 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:25.033 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:25.292 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:25.292 08:15:46 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.292 08:15:46 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:25.292 08:15:46 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:25.292 08:15:46 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.292 08:15:46 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:25.292 08:15:46 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:25.292 08:15:46 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:25.292 08:15:46 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:25.292 08:15:46 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.292 08:15:46 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:25.292 08:15:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:25.292 08:15:46 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83003 00:20:25.292 08:15:46 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83003 00:20:25.292 08:15:46 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 83003 ']' 00:20:25.292 08:15:46 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:25.292 08:15:46 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.292 08:15:46 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:25.292 08:15:46 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.292 08:15:46 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:25.292 08:15:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:25.292 [2024-06-10 08:15:47.029379] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:20:25.292 [2024-06-10 08:15:47.029488] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.550 [2024-06-10 08:15:47.170939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.550 [2024-06-10 08:15:47.284294] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
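[editor's aside] For readability, the network setup that nvmf_veth_init performs in the trace above can be condensed into the short sketch below. It is reconstructed only from the commands actually logged here (namespace nvmf_tgt_ns_spdk, veth pairs nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2, bridge nvmf_br, and the 10.0.0.0/24 addresses); the retry/cleanup logic of the real helper is omitted, so treat it as a sketch of the topology rather than the harness itself.

    # Namespace that will host the nvmf target application.
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one for the initiator, two for the target listeners.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-side ends move into the namespace.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator at 10.0.0.1, target interfaces at 10.0.0.2/10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring the links up and bridge the host-side peers together.
    ip link set nvmf_init_if up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Accept NVMe/TCP (port 4420) on the initiator interface, allow traffic to
    # hairpin across the bridge, then verify reachability in both directions.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this in place, the target is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...), which is exactly what the trace does next.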
00:20:25.550 [2024-06-10 08:15:47.284359] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.550 [2024-06-10 08:15:47.284372] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.550 [2024-06-10 08:15:47.284380] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.550 [2024-06-10 08:15:47.284388] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.550 [2024-06-10 08:15:47.284415] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.550 [2024-06-10 08:15:47.339890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:26.116 08:15:47 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:26.116 08:15:47 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:20:26.116 08:15:47 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.116 08:15:47 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:26.116 08:15:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:26.375 08:15:48 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.375 08:15:48 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:26.375 08:15:48 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:26.375 08:15:48 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.375 08:15:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:26.375 [2024-06-10 08:15:48.009938] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.375 08:15:48 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.375 08:15:48 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:26.375 08:15:48 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:26.375 08:15:48 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:26.375 08:15:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:26.375 ************************************ 00:20:26.375 START TEST fio_dif_1_default 00:20:26.375 ************************************ 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:26.375 bdev_null0 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:26.375 
08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:26.375 [2024-06-10 08:15:48.058080] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:26.375 { 00:20:26.375 "params": { 00:20:26.375 "name": "Nvme$subsystem", 00:20:26.375 "trtype": "$TEST_TRANSPORT", 00:20:26.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.375 "adrfam": "ipv4", 00:20:26.375 "trsvcid": "$NVMF_PORT", 00:20:26.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.375 "hdgst": ${hdgst:-false}, 00:20:26.375 "ddgst": ${ddgst:-false} 00:20:26.375 }, 00:20:26.375 "method": "bdev_nvme_attach_controller" 00:20:26.375 } 00:20:26.375 EOF 00:20:26.375 )") 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:26.375 "params": { 00:20:26.375 "name": "Nvme0", 00:20:26.375 "trtype": "tcp", 00:20:26.375 "traddr": "10.0.0.2", 00:20:26.375 "adrfam": "ipv4", 00:20:26.375 "trsvcid": "4420", 00:20:26.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:26.375 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:26.375 "hdgst": false, 00:20:26.375 "ddgst": false 00:20:26.375 }, 00:20:26.375 "method": "bdev_nvme_attach_controller" 00:20:26.375 }' 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:26.375 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:20:26.376 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:20:26.376 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:20:26.376 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:20:26.376 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:26.376 08:15:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:26.634 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:26.634 fio-3.35 00:20:26.634 Starting 1 thread 00:20:38.827 00:20:38.827 filename0: (groupid=0, jobs=1): err= 0: pid=83064: Mon Jun 10 08:15:58 2024 00:20:38.827 read: IOPS=8299, BW=32.4MiB/s (34.0MB/s)(324MiB/10001msec) 00:20:38.827 slat (usec): min=7, max=296, avg= 8.72, stdev= 3.11 00:20:38.827 clat (usec): min=393, max=4126, avg=456.11, stdev=43.10 00:20:38.827 lat (usec): min=401, max=4160, avg=464.83, stdev=43.48 00:20:38.827 clat percentiles (usec): 00:20:38.827 | 1.00th=[ 400], 5.00th=[ 412], 
10.00th=[ 420], 20.00th=[ 433], 00:20:38.827 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 457], 60.00th=[ 461], 00:20:38.827 | 70.00th=[ 469], 80.00th=[ 478], 90.00th=[ 486], 95.00th=[ 494], 00:20:38.827 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 922], 99.95th=[ 971], 00:20:38.827 | 99.99th=[ 1418] 00:20:38.827 bw ( KiB/s): min=31136, max=35680, per=100.00%, avg=33258.11, stdev=1354.29, samples=19 00:20:38.827 iops : min= 7784, max= 8920, avg=8314.53, stdev=338.57, samples=19 00:20:38.827 lat (usec) : 500=96.23%, 750=3.65%, 1000=0.09% 00:20:38.827 lat (msec) : 2=0.03%, 10=0.01% 00:20:38.827 cpu : usr=84.32%, sys=13.55%, ctx=75, majf=0, minf=0 00:20:38.827 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:38.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.827 issued rwts: total=83004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:38.827 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:38.827 00:20:38.827 Run status group 0 (all jobs): 00:20:38.827 READ: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=324MiB (340MB), run=10001-10001msec 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.827 00:20:38.827 real 0m11.046s 00:20:38.827 user 0m9.107s 00:20:38.827 sys 0m1.647s 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:38.827 ************************************ 00:20:38.827 END TEST fio_dif_1_default 00:20:38.827 ************************************ 00:20:38.827 08:15:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:38.827 08:15:59 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:38.827 08:15:59 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:38.827 08:15:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:38.827 ************************************ 00:20:38.827 START TEST fio_dif_1_multi_subsystems 00:20:38.827 ************************************ 00:20:38.827 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:38.828 bdev_null0 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:38.828 [2024-06-10 08:15:59.152024] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:38.828 bdev_null1 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.828 { 00:20:38.828 "params": { 00:20:38.828 "name": "Nvme$subsystem", 00:20:38.828 "trtype": "$TEST_TRANSPORT", 00:20:38.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.828 "adrfam": "ipv4", 00:20:38.828 "trsvcid": "$NVMF_PORT", 00:20:38.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.828 "hdgst": ${hdgst:-false}, 00:20:38.828 "ddgst": ${ddgst:-false} 00:20:38.828 }, 00:20:38.828 "method": "bdev_nvme_attach_controller" 00:20:38.828 } 00:20:38.828 EOF 00:20:38.828 )") 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 
00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.828 { 00:20:38.828 "params": { 00:20:38.828 "name": "Nvme$subsystem", 00:20:38.828 "trtype": "$TEST_TRANSPORT", 00:20:38.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.828 "adrfam": "ipv4", 00:20:38.828 "trsvcid": "$NVMF_PORT", 00:20:38.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.828 "hdgst": ${hdgst:-false}, 00:20:38.828 "ddgst": ${ddgst:-false} 00:20:38.828 }, 00:20:38.828 "method": "bdev_nvme_attach_controller" 00:20:38.828 } 00:20:38.828 EOF 00:20:38.828 )") 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:38.828 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:38.828 "params": { 00:20:38.828 "name": "Nvme0", 00:20:38.828 "trtype": "tcp", 00:20:38.828 "traddr": "10.0.0.2", 00:20:38.828 "adrfam": "ipv4", 00:20:38.828 "trsvcid": "4420", 00:20:38.828 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:38.828 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:38.828 "hdgst": false, 00:20:38.828 "ddgst": false 00:20:38.828 }, 00:20:38.828 "method": "bdev_nvme_attach_controller" 00:20:38.828 },{ 00:20:38.828 "params": { 00:20:38.828 "name": "Nvme1", 00:20:38.828 "trtype": "tcp", 00:20:38.828 "traddr": "10.0.0.2", 00:20:38.828 "adrfam": "ipv4", 00:20:38.829 "trsvcid": "4420", 00:20:38.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.829 "hdgst": false, 00:20:38.829 "ddgst": false 00:20:38.829 }, 00:20:38.829 "method": "bdev_nvme_attach_controller" 00:20:38.829 }' 00:20:38.829 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:20:38.829 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:20:38.829 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:20:38.829 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.829 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:20:38.829 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:20:38.829 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:20:38.829 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:20:38.829 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:38.829 08:15:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:38.829 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:38.829 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:38.829 fio-3.35 00:20:38.829 Starting 2 threads 00:20:48.805 00:20:48.805 filename0: (groupid=0, jobs=1): err= 0: pid=83223: Mon Jun 10 08:16:09 2024 00:20:48.805 read: IOPS=4426, BW=17.3MiB/s (18.1MB/s)(173MiB/10001msec) 00:20:48.805 slat (nsec): min=6580, max=80000, avg=18013.23, stdev=6870.49 00:20:48.805 clat (usec): min=623, max=2481, avg=855.75, stdev=57.43 00:20:48.805 lat (usec): min=629, max=2503, avg=873.77, stdev=59.76 00:20:48.805 clat percentiles (usec): 00:20:48.805 | 1.00th=[ 734], 5.00th=[ 775], 10.00th=[ 791], 20.00th=[ 816], 00:20:48.805 | 30.00th=[ 832], 40.00th=[ 840], 50.00th=[ 857], 60.00th=[ 865], 00:20:48.805 | 70.00th=[ 881], 80.00th=[ 898], 90.00th=[ 914], 95.00th=[ 938], 00:20:48.805 | 99.00th=[ 1004], 99.50th=[ 1029], 99.90th=[ 1156], 99.95th=[ 1713], 00:20:48.805 | 99.99th=[ 1876] 00:20:48.805 bw ( KiB/s): min=16960, max=19072, per=50.06%, avg=17728.00, stdev=601.04, samples=19 00:20:48.805 iops : min= 4240, 
max= 4768, avg=4432.00, stdev=150.26, samples=19 00:20:48.805 lat (usec) : 750=1.72%, 1000=97.29% 00:20:48.805 lat (msec) : 2=0.98%, 4=0.01% 00:20:48.805 cpu : usr=90.05%, sys=8.40%, ctx=16, majf=0, minf=0 00:20:48.805 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.805 issued rwts: total=44268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.805 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:48.805 filename1: (groupid=0, jobs=1): err= 0: pid=83224: Mon Jun 10 08:16:09 2024 00:20:48.805 read: IOPS=4426, BW=17.3MiB/s (18.1MB/s)(173MiB/10001msec) 00:20:48.805 slat (nsec): min=6209, max=73699, avg=18797.34, stdev=7515.48 00:20:48.805 clat (usec): min=644, max=2487, avg=853.04, stdev=53.46 00:20:48.805 lat (usec): min=657, max=2511, avg=871.83, stdev=55.72 00:20:48.806 clat percentiles (usec): 00:20:48.806 | 1.00th=[ 750], 5.00th=[ 783], 10.00th=[ 799], 20.00th=[ 816], 00:20:48.806 | 30.00th=[ 832], 40.00th=[ 840], 50.00th=[ 848], 60.00th=[ 857], 00:20:48.806 | 70.00th=[ 873], 80.00th=[ 889], 90.00th=[ 906], 95.00th=[ 930], 00:20:48.806 | 99.00th=[ 996], 99.50th=[ 1029], 99.90th=[ 1156], 99.95th=[ 1713], 00:20:48.806 | 99.99th=[ 1876] 00:20:48.806 bw ( KiB/s): min=16960, max=19072, per=50.06%, avg=17729.68, stdev=601.84, samples=19 00:20:48.806 iops : min= 4240, max= 4768, avg=4432.42, stdev=150.46, samples=19 00:20:48.806 lat (usec) : 750=0.98%, 1000=98.12% 00:20:48.806 lat (msec) : 2=0.89%, 4=0.01% 00:20:48.806 cpu : usr=91.59%, sys=7.16%, ctx=12, majf=0, minf=9 00:20:48.806 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.806 issued rwts: total=44272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.806 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:48.806 00:20:48.806 Run status group 0 (all jobs): 00:20:48.806 READ: bw=34.6MiB/s (36.3MB/s), 17.3MiB/s-17.3MiB/s (18.1MB/s-18.1MB/s), io=346MiB (363MB), run=10001-10001msec 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.806 00:20:48.806 real 0m11.137s 00:20:48.806 user 0m18.910s 00:20:48.806 sys 0m1.838s 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:48.806 ************************************ 00:20:48.806 END TEST fio_dif_1_multi_subsystems 00:20:48.806 ************************************ 00:20:48.806 08:16:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:48.806 08:16:10 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:48.806 08:16:10 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:48.806 08:16:10 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:48.806 08:16:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:48.806 ************************************ 00:20:48.806 START TEST fio_dif_rand_params 00:20:48.806 ************************************ 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:48.806 08:16:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.806 bdev_null0 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.806 [2024-06-10 08:16:10.344332] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.806 { 00:20:48.806 "params": { 00:20:48.806 "name": "Nvme$subsystem", 00:20:48.806 "trtype": "$TEST_TRANSPORT", 00:20:48.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.806 "adrfam": "ipv4", 00:20:48.806 "trsvcid": "$NVMF_PORT", 00:20:48.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.806 "hdgst": ${hdgst:-false}, 00:20:48.806 "ddgst": ${ddgst:-false} 00:20:48.806 }, 00:20:48.806 "method": "bdev_nvme_attach_controller" 00:20:48.806 } 00:20:48.806 EOF 00:20:48.806 )") 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:48.806 08:16:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:48.806 "params": { 00:20:48.806 "name": "Nvme0", 00:20:48.806 "trtype": "tcp", 00:20:48.806 "traddr": "10.0.0.2", 00:20:48.806 "adrfam": "ipv4", 00:20:48.806 "trsvcid": "4420", 00:20:48.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:48.807 "hdgst": false, 00:20:48.807 "ddgst": false 00:20:48.807 }, 00:20:48.807 "method": "bdev_nvme_attach_controller" 00:20:48.807 }' 00:20:48.807 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:20:48.807 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:20:48.807 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.807 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.807 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:20:48.807 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:20:48.807 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:20:48.807 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:20:48.807 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:48.807 08:16:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.807 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:48.807 ... 
00:20:48.807 fio-3.35 00:20:48.807 Starting 3 threads 00:20:55.394 00:20:55.394 filename0: (groupid=0, jobs=1): err= 0: pid=83384: Mon Jun 10 08:16:16 2024 00:20:55.394 read: IOPS=242, BW=30.3MiB/s (31.7MB/s)(152MiB/5005msec) 00:20:55.394 slat (usec): min=6, max=105, avg=19.64, stdev=10.12 00:20:55.394 clat (usec): min=11393, max=20635, avg=12342.57, stdev=749.54 00:20:55.394 lat (usec): min=11401, max=20660, avg=12362.21, stdev=749.69 00:20:55.394 clat percentiles (usec): 00:20:55.394 | 1.00th=[11469], 5.00th=[11600], 10.00th=[12125], 20.00th=[12125], 00:20:55.394 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12256], 60.00th=[12256], 00:20:55.394 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12387], 95.00th=[12518], 00:20:55.394 | 99.00th=[15401], 99.50th=[15533], 99.90th=[20579], 99.95th=[20579], 00:20:55.394 | 99.99th=[20579] 00:20:55.394 bw ( KiB/s): min=29184, max=31488, per=33.28%, avg=30950.40, stdev=728.59, samples=10 00:20:55.394 iops : min= 228, max= 246, avg=241.80, stdev= 5.69, samples=10 00:20:55.394 lat (msec) : 20=99.75%, 50=0.25% 00:20:55.394 cpu : usr=91.65%, sys=7.77%, ctx=5, majf=0, minf=0 00:20:55.394 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.394 issued rwts: total=1212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.394 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:55.394 filename0: (groupid=0, jobs=1): err= 0: pid=83385: Mon Jun 10 08:16:16 2024 00:20:55.394 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(152MiB/5002msec) 00:20:55.394 slat (nsec): min=7222, max=56517, avg=21578.14, stdev=10525.95 00:20:55.394 clat (usec): min=11408, max=20537, avg=12328.72, stdev=713.73 00:20:55.394 lat (usec): min=11424, max=20556, avg=12350.30, stdev=713.20 00:20:55.394 clat percentiles (usec): 00:20:55.394 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11994], 20.00th=[12125], 00:20:55.394 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12256], 60.00th=[12256], 00:20:55.394 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12387], 95.00th=[12518], 00:20:55.394 | 99.00th=[15401], 99.50th=[15401], 99.90th=[20579], 99.95th=[20579], 00:20:55.394 | 99.99th=[20579] 00:20:55.394 bw ( KiB/s): min=29184, max=31488, per=33.31%, avg=30976.00, stdev=768.00, samples=9 00:20:55.394 iops : min= 228, max= 246, avg=242.00, stdev= 6.00, samples=9 00:20:55.394 lat (msec) : 20=99.75%, 50=0.25% 00:20:55.394 cpu : usr=91.44%, sys=7.66%, ctx=68, majf=0, minf=0 00:20:55.394 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.394 issued rwts: total=1212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.394 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:55.394 filename0: (groupid=0, jobs=1): err= 0: pid=83386: Mon Jun 10 08:16:16 2024 00:20:55.394 read: IOPS=242, BW=30.3MiB/s (31.7MB/s)(152MiB/5005msec) 00:20:55.394 slat (nsec): min=6100, max=57303, avg=21312.51, stdev=10531.49 00:20:55.394 clat (usec): min=11401, max=20635, avg=12336.90, stdev=745.61 00:20:55.394 lat (usec): min=11418, max=20662, avg=12358.21, stdev=744.96 00:20:55.394 clat percentiles (usec): 00:20:55.394 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11994], 20.00th=[12125], 00:20:55.394 | 30.00th=[12256], 40.00th=[12256], 
50.00th=[12256], 60.00th=[12256], 00:20:55.394 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12387], 95.00th=[12518], 00:20:55.394 | 99.00th=[15401], 99.50th=[15533], 99.90th=[20579], 99.95th=[20579], 00:20:55.394 | 99.99th=[20579] 00:20:55.394 bw ( KiB/s): min=29184, max=31488, per=33.28%, avg=30950.40, stdev=728.59, samples=10 00:20:55.394 iops : min= 228, max= 246, avg=241.80, stdev= 5.69, samples=10 00:20:55.394 lat (msec) : 20=99.75%, 50=0.25% 00:20:55.394 cpu : usr=91.61%, sys=7.77%, ctx=81, majf=0, minf=0 00:20:55.394 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.394 issued rwts: total=1212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.394 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:55.394 00:20:55.394 Run status group 0 (all jobs): 00:20:55.394 READ: bw=90.8MiB/s (95.2MB/s), 30.3MiB/s-30.3MiB/s (31.7MB/s-31.8MB/s), io=455MiB (477MB), run=5002-5005msec 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:55.394 
08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.394 bdev_null0 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.394 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.395 [2024-06-10 08:16:16.359075] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.395 bdev_null1 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.395 bdev_null2 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.395 { 00:20:55.395 "params": { 00:20:55.395 "name": "Nvme$subsystem", 00:20:55.395 "trtype": "$TEST_TRANSPORT", 00:20:55.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.395 "adrfam": "ipv4", 00:20:55.395 "trsvcid": "$NVMF_PORT", 00:20:55.395 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.395 "hdgst": ${hdgst:-false}, 00:20:55.395 "ddgst": ${ddgst:-false} 00:20:55.395 }, 00:20:55.395 "method": "bdev_nvme_attach_controller" 00:20:55.395 } 00:20:55.395 EOF 00:20:55.395 )") 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.395 { 00:20:55.395 "params": { 00:20:55.395 "name": "Nvme$subsystem", 00:20:55.395 "trtype": "$TEST_TRANSPORT", 00:20:55.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.395 "adrfam": "ipv4", 00:20:55.395 "trsvcid": "$NVMF_PORT", 00:20:55.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.395 "hdgst": ${hdgst:-false}, 00:20:55.395 "ddgst": ${ddgst:-false} 00:20:55.395 }, 00:20:55.395 "method": "bdev_nvme_attach_controller" 00:20:55.395 } 00:20:55.395 EOF 00:20:55.395 )") 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.395 { 00:20:55.395 "params": { 00:20:55.395 "name": "Nvme$subsystem", 00:20:55.395 "trtype": "$TEST_TRANSPORT", 00:20:55.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.395 "adrfam": "ipv4", 00:20:55.395 "trsvcid": "$NVMF_PORT", 00:20:55.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.395 "hdgst": ${hdgst:-false}, 00:20:55.395 "ddgst": ${ddgst:-false} 00:20:55.395 }, 00:20:55.395 "method": "bdev_nvme_attach_controller" 00:20:55.395 } 00:20:55.395 EOF 00:20:55.395 )") 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:55.395 08:16:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:55.395 "params": { 00:20:55.395 "name": "Nvme0", 00:20:55.395 "trtype": "tcp", 00:20:55.395 "traddr": "10.0.0.2", 00:20:55.395 "adrfam": "ipv4", 00:20:55.395 "trsvcid": "4420", 00:20:55.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:55.395 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:55.395 "hdgst": false, 00:20:55.395 "ddgst": false 00:20:55.395 }, 00:20:55.395 "method": "bdev_nvme_attach_controller" 00:20:55.395 },{ 00:20:55.395 "params": { 00:20:55.395 "name": "Nvme1", 00:20:55.395 "trtype": "tcp", 00:20:55.395 "traddr": "10.0.0.2", 00:20:55.395 "adrfam": "ipv4", 00:20:55.395 "trsvcid": "4420", 00:20:55.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.396 "hdgst": false, 00:20:55.396 "ddgst": false 00:20:55.396 }, 00:20:55.396 "method": "bdev_nvme_attach_controller" 00:20:55.396 },{ 00:20:55.396 "params": { 00:20:55.396 "name": "Nvme2", 00:20:55.396 "trtype": "tcp", 00:20:55.396 "traddr": "10.0.0.2", 00:20:55.396 "adrfam": "ipv4", 00:20:55.396 "trsvcid": "4420", 00:20:55.396 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:55.396 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:55.396 "hdgst": false, 00:20:55.396 "ddgst": false 00:20:55.396 }, 00:20:55.396 "method": "bdev_nvme_attach_controller" 00:20:55.396 }' 00:20:55.396 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:20:55.396 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:20:55.396 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.396 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.396 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:20:55.396 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:20:55.396 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:20:55.396 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:20:55.396 08:16:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1351 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:55.396 08:16:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:55.396 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:55.396 ... 00:20:55.396 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:55.396 ... 00:20:55.396 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:55.396 ... 00:20:55.396 fio-3.35 00:20:55.396 Starting 24 threads 00:21:07.668 00:21:07.668 filename0: (groupid=0, jobs=1): err= 0: pid=83482: Mon Jun 10 08:16:27 2024 00:21:07.668 read: IOPS=190, BW=764KiB/s (782kB/s)(7684KiB/10063msec) 00:21:07.668 slat (usec): min=4, max=1037, avg=18.83, stdev=25.91 00:21:07.668 clat (msec): min=4, max=171, avg=83.61, stdev=27.61 00:21:07.668 lat (msec): min=4, max=171, avg=83.63, stdev=27.61 00:21:07.668 clat percentiles (msec): 00:21:07.668 | 1.00th=[ 7], 5.00th=[ 45], 10.00th=[ 55], 20.00th=[ 69], 00:21:07.668 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 85], 00:21:07.668 | 70.00th=[ 93], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 133], 00:21:07.668 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 171], 99.95th=[ 171], 00:21:07.668 | 99.99th=[ 171] 00:21:07.668 bw ( KiB/s): min= 512, max= 1408, per=3.86%, avg=761.90, stdev=193.15, samples=20 00:21:07.668 iops : min= 128, max= 352, avg=190.45, stdev=48.31, samples=20 00:21:07.668 lat (msec) : 10=2.50%, 20=0.73%, 50=3.18%, 100=71.37%, 250=22.23% 00:21:07.668 cpu : usr=39.77%, sys=1.67%, ctx=1545, majf=0, minf=9 00:21:07.668 IO depths : 1=0.2%, 2=4.4%, 4=17.0%, 8=64.5%, 16=14.0%, 32=0.0%, >=64=0.0% 00:21:07.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.668 complete : 0=0.0%, 4=92.4%, 8=3.9%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.668 issued rwts: total=1921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.668 filename0: (groupid=0, jobs=1): err= 0: pid=83483: Mon Jun 10 08:16:27 2024 00:21:07.668 read: IOPS=207, BW=828KiB/s (848kB/s)(8328KiB/10052msec) 00:21:07.668 slat (usec): min=7, max=8045, avg=29.86, stdev=304.42 00:21:07.668 clat (msec): min=20, max=167, avg=77.02, stdev=23.46 00:21:07.668 lat (msec): min=20, max=167, avg=77.05, stdev=23.45 00:21:07.668 clat percentiles (msec): 00:21:07.668 | 1.00th=[ 24], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 59], 00:21:07.668 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:21:07.668 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 111], 95.00th=[ 121], 00:21:07.668 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 167], 00:21:07.668 | 99.99th=[ 167] 00:21:07.668 bw ( KiB/s): min= 576, max= 1304, per=4.19%, avg=826.40, stdev=153.89, samples=20 00:21:07.668 iops : min= 144, max= 326, avg=206.60, stdev=38.47, samples=20 00:21:07.668 lat (msec) : 50=12.10%, 100=72.81%, 250=15.08% 00:21:07.668 cpu : usr=33.65%, sys=1.12%, ctx=922, majf=0, minf=9 00:21:07.668 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=79.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:07.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 issued rwts: total=2082,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:07.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.669 filename0: (groupid=0, jobs=1): err= 0: pid=83484: Mon Jun 10 08:16:27 2024 00:21:07.669 read: IOPS=198, BW=796KiB/s (815kB/s)(8000KiB/10051msec) 00:21:07.669 slat (usec): min=4, max=9027, avg=37.38, stdev=318.13 00:21:07.669 clat (msec): min=15, max=160, avg=80.14, stdev=23.06 00:21:07.669 lat (msec): min=15, max=160, avg=80.17, stdev=23.06 00:21:07.669 clat percentiles (msec): 00:21:07.669 | 1.00th=[ 24], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 64], 00:21:07.669 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 82], 00:21:07.669 | 70.00th=[ 87], 80.00th=[ 97], 90.00th=[ 113], 95.00th=[ 123], 00:21:07.669 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 161], 00:21:07.669 | 99.99th=[ 161] 00:21:07.669 bw ( KiB/s): min= 584, max= 1248, per=4.02%, avg=793.60, stdev=149.66, samples=20 00:21:07.669 iops : min= 146, max= 312, avg=198.40, stdev=37.41, samples=20 00:21:07.669 lat (msec) : 20=0.80%, 50=6.40%, 100=74.25%, 250=18.55% 00:21:07.669 cpu : usr=44.65%, sys=1.73%, ctx=1499, majf=0, minf=9 00:21:07.669 IO depths : 1=0.1%, 2=3.2%, 4=13.0%, 8=69.5%, 16=14.2%, 32=0.0%, >=64=0.0% 00:21:07.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 complete : 0=0.0%, 4=90.7%, 8=6.4%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.669 filename0: (groupid=0, jobs=1): err= 0: pid=83485: Mon Jun 10 08:16:27 2024 00:21:07.669 read: IOPS=197, BW=790KiB/s (809kB/s)(7948KiB/10059msec) 00:21:07.669 slat (usec): min=4, max=8046, avg=25.58, stdev=254.58 00:21:07.669 clat (msec): min=3, max=170, avg=80.80, stdev=24.79 00:21:07.669 lat (msec): min=3, max=170, avg=80.82, stdev=24.79 00:21:07.669 clat percentiles (msec): 00:21:07.669 | 1.00th=[ 8], 5.00th=[ 46], 10.00th=[ 53], 20.00th=[ 63], 00:21:07.669 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 84], 00:21:07.669 | 70.00th=[ 87], 80.00th=[ 101], 90.00th=[ 116], 95.00th=[ 126], 00:21:07.669 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 167], 99.95th=[ 171], 00:21:07.669 | 99.99th=[ 171] 00:21:07.669 bw ( KiB/s): min= 560, max= 1280, per=4.00%, avg=788.40, stdev=173.60, samples=20 00:21:07.669 iops : min= 140, max= 320, avg=197.10, stdev=43.40, samples=20 00:21:07.669 lat (msec) : 4=0.10%, 10=1.51%, 20=0.70%, 50=4.53%, 100=72.87% 00:21:07.669 lat (msec) : 250=20.28% 00:21:07.669 cpu : usr=38.47%, sys=1.64%, ctx=1207, majf=0, minf=9 00:21:07.669 IO depths : 1=0.1%, 2=3.4%, 4=13.4%, 8=68.7%, 16=14.4%, 32=0.0%, >=64=0.0% 00:21:07.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 complete : 0=0.0%, 4=91.1%, 8=6.0%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 issued rwts: total=1987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.669 filename0: (groupid=0, jobs=1): err= 0: pid=83486: Mon Jun 10 08:16:27 2024 00:21:07.669 read: IOPS=210, BW=840KiB/s (860kB/s)(8420KiB/10022msec) 00:21:07.669 slat (usec): min=4, max=8046, avg=61.67, stdev=551.53 00:21:07.669 clat (msec): min=21, max=143, avg=75.87, stdev=22.33 00:21:07.669 lat (msec): min=21, max=143, avg=75.93, stdev=22.35 00:21:07.669 clat percentiles (msec): 00:21:07.669 | 1.00th=[ 27], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 60], 00:21:07.669 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 
60.00th=[ 83], 00:21:07.669 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 121], 00:21:07.669 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 144], 00:21:07.669 | 99.99th=[ 144] 00:21:07.669 bw ( KiB/s): min= 608, max= 1272, per=4.24%, avg=835.65, stdev=143.88, samples=20 00:21:07.669 iops : min= 152, max= 318, avg=208.80, stdev=36.00, samples=20 00:21:07.669 lat (msec) : 50=14.44%, 100=74.01%, 250=11.54% 00:21:07.669 cpu : usr=31.97%, sys=1.18%, ctx=843, majf=0, minf=9 00:21:07.669 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=79.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:07.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.669 filename0: (groupid=0, jobs=1): err= 0: pid=83487: Mon Jun 10 08:16:27 2024 00:21:07.669 read: IOPS=214, BW=858KiB/s (878kB/s)(8620KiB/10051msec) 00:21:07.669 slat (usec): min=7, max=9030, avg=33.64, stdev=356.30 00:21:07.669 clat (msec): min=15, max=168, avg=74.43, stdev=22.97 00:21:07.669 lat (msec): min=15, max=168, avg=74.46, stdev=22.97 00:21:07.669 clat percentiles (msec): 00:21:07.669 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 57], 00:21:07.669 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 80], 00:21:07.669 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 121], 00:21:07.669 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 157], 00:21:07.669 | 99.99th=[ 169] 00:21:07.669 bw ( KiB/s): min= 584, max= 1216, per=4.34%, avg=855.60, stdev=149.83, samples=20 00:21:07.669 iops : min= 146, max= 304, avg=213.90, stdev=37.46, samples=20 00:21:07.669 lat (msec) : 20=0.74%, 50=16.01%, 100=71.09%, 250=12.16% 00:21:07.669 cpu : usr=32.98%, sys=1.15%, ctx=901, majf=0, minf=9 00:21:07.669 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:07.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.669 filename0: (groupid=0, jobs=1): err= 0: pid=83488: Mon Jun 10 08:16:27 2024 00:21:07.669 read: IOPS=214, BW=857KiB/s (877kB/s)(8572KiB/10008msec) 00:21:07.669 slat (usec): min=4, max=8048, avg=31.29, stdev=300.23 00:21:07.669 clat (msec): min=23, max=152, avg=74.55, stdev=22.84 00:21:07.669 lat (msec): min=23, max=152, avg=74.58, stdev=22.83 00:21:07.669 clat percentiles (msec): 00:21:07.669 | 1.00th=[ 24], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:21:07.669 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:21:07.669 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 121], 00:21:07.669 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 153], 00:21:07.669 | 99.99th=[ 153] 00:21:07.669 bw ( KiB/s): min= 616, max= 1360, per=4.33%, avg=853.00, stdev=147.70, samples=20 00:21:07.669 iops : min= 154, max= 340, avg=213.20, stdev=36.96, samples=20 00:21:07.669 lat (msec) : 50=16.05%, 100=71.96%, 250=11.99% 00:21:07.669 cpu : usr=34.05%, sys=1.36%, ctx=944, majf=0, minf=9 00:21:07.669 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.4%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:07.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 
complete : 0=0.0%, 4=87.9%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 issued rwts: total=2143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.669 filename0: (groupid=0, jobs=1): err= 0: pid=83489: Mon Jun 10 08:16:27 2024 00:21:07.669 read: IOPS=214, BW=856KiB/s (877kB/s)(8596KiB/10042msec) 00:21:07.669 slat (usec): min=5, max=7646, avg=37.29, stdev=314.99 00:21:07.669 clat (msec): min=21, max=161, avg=74.53, stdev=22.75 00:21:07.669 lat (msec): min=21, max=161, avg=74.57, stdev=22.76 00:21:07.669 clat percentiles (msec): 00:21:07.669 | 1.00th=[ 26], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 56], 00:21:07.669 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 80], 00:21:07.669 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 122], 00:21:07.669 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:21:07.669 | 99.99th=[ 161] 00:21:07.669 bw ( KiB/s): min= 560, max= 1280, per=4.33%, avg=853.20, stdev=141.55, samples=20 00:21:07.669 iops : min= 140, max= 320, avg=213.30, stdev=35.39, samples=20 00:21:07.669 lat (msec) : 50=12.24%, 100=74.22%, 250=13.54% 00:21:07.669 cpu : usr=44.09%, sys=1.71%, ctx=1641, majf=0, minf=9 00:21:07.669 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:07.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 issued rwts: total=2149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.669 filename1: (groupid=0, jobs=1): err= 0: pid=83490: Mon Jun 10 08:16:27 2024 00:21:07.669 read: IOPS=216, BW=868KiB/s (889kB/s)(8716KiB/10044msec) 00:21:07.669 slat (usec): min=4, max=8037, avg=31.16, stdev=304.14 00:21:07.669 clat (msec): min=21, max=139, avg=73.55, stdev=22.61 00:21:07.669 lat (msec): min=21, max=139, avg=73.59, stdev=22.61 00:21:07.669 clat percentiles (msec): 00:21:07.669 | 1.00th=[ 26], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 56], 00:21:07.669 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:21:07.669 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 107], 95.00th=[ 121], 00:21:07.669 | 99.00th=[ 133], 99.50th=[ 134], 99.90th=[ 140], 99.95th=[ 140], 00:21:07.669 | 99.99th=[ 140] 00:21:07.669 bw ( KiB/s): min= 616, max= 1256, per=4.39%, avg=865.25, stdev=153.50, samples=20 00:21:07.669 iops : min= 154, max= 314, avg=216.30, stdev=38.39, samples=20 00:21:07.669 lat (msec) : 50=16.80%, 100=71.68%, 250=11.52% 00:21:07.669 cpu : usr=34.87%, sys=1.25%, ctx=983, majf=0, minf=9 00:21:07.669 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:07.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.669 issued rwts: total=2179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.669 filename1: (groupid=0, jobs=1): err= 0: pid=83491: Mon Jun 10 08:16:27 2024 00:21:07.669 read: IOPS=205, BW=823KiB/s (843kB/s)(8264KiB/10041msec) 00:21:07.669 slat (usec): min=4, max=8029, avg=32.71, stdev=318.00 00:21:07.669 clat (msec): min=27, max=158, avg=77.52, stdev=22.68 00:21:07.669 lat (msec): min=27, max=158, avg=77.56, stdev=22.69 00:21:07.669 clat percentiles (msec): 00:21:07.669 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 49], 20.00th=[ 59], 
00:21:07.669 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 81], 00:21:07.669 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 111], 95.00th=[ 121], 00:21:07.669 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 157], 00:21:07.669 | 99.99th=[ 159] 00:21:07.670 bw ( KiB/s): min= 536, max= 1192, per=4.16%, avg=820.00, stdev=139.09, samples=20 00:21:07.670 iops : min= 134, max= 298, avg=205.00, stdev=34.77, samples=20 00:21:07.670 lat (msec) : 50=11.42%, 100=72.89%, 250=15.68% 00:21:07.670 cpu : usr=37.10%, sys=1.53%, ctx=1145, majf=0, minf=9 00:21:07.670 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:07.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 issued rwts: total=2066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.670 filename1: (groupid=0, jobs=1): err= 0: pid=83492: Mon Jun 10 08:16:27 2024 00:21:07.670 read: IOPS=220, BW=882KiB/s (903kB/s)(8828KiB/10011msec) 00:21:07.670 slat (usec): min=5, max=8051, avg=32.52, stdev=280.55 00:21:07.670 clat (msec): min=18, max=137, avg=72.44, stdev=23.06 00:21:07.670 lat (msec): min=18, max=137, avg=72.47, stdev=23.07 00:21:07.670 clat percentiles (msec): 00:21:07.670 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 53], 00:21:07.670 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 78], 00:21:07.670 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 106], 95.00th=[ 121], 00:21:07.670 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 138], 00:21:07.670 | 99.99th=[ 138] 00:21:07.670 bw ( KiB/s): min= 608, max= 1304, per=4.46%, avg=878.40, stdev=145.79, samples=20 00:21:07.670 iops : min= 152, max= 326, avg=219.55, stdev=36.49, samples=20 00:21:07.670 lat (msec) : 20=0.14%, 50=17.08%, 100=71.36%, 250=11.42% 00:21:07.670 cpu : usr=39.74%, sys=1.34%, ctx=1392, majf=0, minf=9 00:21:07.670 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:07.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 issued rwts: total=2207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.670 filename1: (groupid=0, jobs=1): err= 0: pid=83493: Mon Jun 10 08:16:27 2024 00:21:07.670 read: IOPS=196, BW=786KiB/s (805kB/s)(7896KiB/10047msec) 00:21:07.670 slat (usec): min=4, max=6023, avg=29.95, stdev=225.91 00:21:07.670 clat (msec): min=26, max=156, avg=81.23, stdev=22.75 00:21:07.670 lat (msec): min=26, max=156, avg=81.26, stdev=22.75 00:21:07.670 clat percentiles (msec): 00:21:07.670 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 66], 00:21:07.670 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 83], 00:21:07.670 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 120], 95.00th=[ 127], 00:21:07.670 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 157], 00:21:07.670 | 99.99th=[ 157] 00:21:07.670 bw ( KiB/s): min= 560, max= 1152, per=3.97%, avg=782.80, stdev=141.30, samples=20 00:21:07.670 iops : min= 140, max= 288, avg=195.65, stdev=35.35, samples=20 00:21:07.670 lat (msec) : 50=6.89%, 100=74.42%, 250=18.69% 00:21:07.670 cpu : usr=41.63%, sys=1.67%, ctx=1243, majf=0, minf=9 00:21:07.670 IO depths : 1=0.1%, 2=2.4%, 4=9.8%, 8=72.6%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:07.670 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 complete : 0=0.0%, 4=90.1%, 8=7.7%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.670 filename1: (groupid=0, jobs=1): err= 0: pid=83494: Mon Jun 10 08:16:27 2024 00:21:07.670 read: IOPS=214, BW=858KiB/s (878kB/s)(8632KiB/10062msec) 00:21:07.670 slat (usec): min=3, max=8026, avg=22.16, stdev=191.38 00:21:07.670 clat (msec): min=4, max=156, avg=74.42, stdev=24.96 00:21:07.670 lat (msec): min=4, max=156, avg=74.44, stdev=24.95 00:21:07.670 clat percentiles (msec): 00:21:07.670 | 1.00th=[ 13], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 56], 00:21:07.670 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:21:07.670 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 109], 95.00th=[ 123], 00:21:07.670 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 155], 99.95th=[ 157], 00:21:07.670 | 99.99th=[ 157] 00:21:07.670 bw ( KiB/s): min= 560, max= 1272, per=4.35%, avg=857.40, stdev=162.81, samples=20 00:21:07.670 iops : min= 140, max= 318, avg=214.35, stdev=40.70, samples=20 00:21:07.670 lat (msec) : 10=0.74%, 20=1.39%, 50=14.64%, 100=68.67%, 250=14.55% 00:21:07.670 cpu : usr=34.49%, sys=1.29%, ctx=1047, majf=0, minf=9 00:21:07.670 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:07.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.670 filename1: (groupid=0, jobs=1): err= 0: pid=83495: Mon Jun 10 08:16:27 2024 00:21:07.670 read: IOPS=200, BW=801KiB/s (820kB/s)(8048KiB/10051msec) 00:21:07.670 slat (usec): min=7, max=8040, avg=32.26, stdev=287.94 00:21:07.670 clat (msec): min=23, max=143, avg=79.63, stdev=21.31 00:21:07.670 lat (msec): min=23, max=143, avg=79.66, stdev=21.31 00:21:07.670 clat percentiles (msec): 00:21:07.670 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 63], 00:21:07.670 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 83], 00:21:07.670 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 121], 00:21:07.670 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 142], 00:21:07.670 | 99.99th=[ 144] 00:21:07.670 bw ( KiB/s): min= 616, max= 1272, per=4.05%, avg=798.40, stdev=143.19, samples=20 00:21:07.670 iops : min= 154, max= 318, avg=199.60, stdev=35.80, samples=20 00:21:07.670 lat (msec) : 50=6.61%, 100=77.44%, 250=15.95% 00:21:07.670 cpu : usr=40.43%, sys=1.45%, ctx=1388, majf=0, minf=9 00:21:07.670 IO depths : 1=0.1%, 2=2.4%, 4=9.8%, 8=72.8%, 16=14.9%, 32=0.0%, >=64=0.0% 00:21:07.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 complete : 0=0.0%, 4=90.0%, 8=7.9%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.670 filename1: (groupid=0, jobs=1): err= 0: pid=83496: Mon Jun 10 08:16:27 2024 00:21:07.670 read: IOPS=221, BW=885KiB/s (906kB/s)(8876KiB/10029msec) 00:21:07.670 slat (usec): min=5, max=5028, avg=23.34, stdev=136.65 00:21:07.670 clat (msec): min=18, max=147, avg=72.15, stdev=23.30 00:21:07.670 lat (msec): min=18, max=147, avg=72.17, stdev=23.30 00:21:07.670 clat percentiles 
(msec): 00:21:07.670 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 52], 00:21:07.670 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 77], 00:21:07.670 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 107], 95.00th=[ 122], 00:21:07.670 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 148], 00:21:07.670 | 99.99th=[ 148] 00:21:07.670 bw ( KiB/s): min= 560, max= 1328, per=4.48%, avg=883.50, stdev=153.74, samples=20 00:21:07.670 iops : min= 140, max= 332, avg=220.85, stdev=38.46, samples=20 00:21:07.670 lat (msec) : 20=0.14%, 50=17.85%, 100=70.71%, 250=11.31% 00:21:07.670 cpu : usr=38.52%, sys=1.33%, ctx=1117, majf=0, minf=9 00:21:07.670 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:07.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 issued rwts: total=2219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.670 filename1: (groupid=0, jobs=1): err= 0: pid=83497: Mon Jun 10 08:16:27 2024 00:21:07.670 read: IOPS=215, BW=860KiB/s (881kB/s)(8640KiB/10042msec) 00:21:07.670 slat (usec): min=5, max=8038, avg=32.52, stdev=310.88 00:21:07.670 clat (msec): min=23, max=144, avg=74.20, stdev=22.79 00:21:07.670 lat (msec): min=23, max=144, avg=74.24, stdev=22.80 00:21:07.670 clat percentiles (msec): 00:21:07.670 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 56], 00:21:07.670 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:21:07.670 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 121], 00:21:07.670 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:21:07.670 | 99.99th=[ 144] 00:21:07.670 bw ( KiB/s): min= 608, max= 1192, per=4.35%, avg=857.70, stdev=138.57, samples=20 00:21:07.670 iops : min= 152, max= 298, avg=214.40, stdev=34.65, samples=20 00:21:07.670 lat (msec) : 50=15.56%, 100=71.57%, 250=12.87% 00:21:07.670 cpu : usr=35.17%, sys=1.18%, ctx=918, majf=0, minf=0 00:21:07.670 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:07.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.670 filename2: (groupid=0, jobs=1): err= 0: pid=83498: Mon Jun 10 08:16:27 2024 00:21:07.670 read: IOPS=210, BW=843KiB/s (863kB/s)(8464KiB/10042msec) 00:21:07.670 slat (usec): min=4, max=8027, avg=27.10, stdev=213.51 00:21:07.670 clat (msec): min=22, max=155, avg=75.70, stdev=22.45 00:21:07.670 lat (msec): min=22, max=155, avg=75.73, stdev=22.46 00:21:07.670 clat percentiles (msec): 00:21:07.670 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 57], 00:21:07.670 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 81], 00:21:07.670 | 70.00th=[ 84], 80.00th=[ 90], 90.00th=[ 108], 95.00th=[ 121], 00:21:07.670 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 148], 00:21:07.670 | 99.99th=[ 157] 00:21:07.670 bw ( KiB/s): min= 616, max= 1280, per=4.26%, avg=840.05, stdev=143.26, samples=20 00:21:07.670 iops : min= 154, max= 320, avg=210.00, stdev=35.83, samples=20 00:21:07.670 lat (msec) : 50=13.47%, 100=72.54%, 250=13.99% 00:21:07.670 cpu : usr=39.38%, sys=1.59%, ctx=1106, majf=0, minf=9 00:21:07.670 IO depths : 1=0.1%, 2=1.2%, 
4=4.7%, 8=78.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:07.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.670 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.670 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.670 filename2: (groupid=0, jobs=1): err= 0: pid=83499: Mon Jun 10 08:16:27 2024 00:21:07.670 read: IOPS=211, BW=846KiB/s (866kB/s)(8468KiB/10013msec) 00:21:07.670 slat (usec): min=4, max=8040, avg=27.63, stdev=261.40 00:21:07.670 clat (msec): min=15, max=142, avg=75.52, stdev=22.38 00:21:07.671 lat (msec): min=15, max=142, avg=75.55, stdev=22.38 00:21:07.671 clat percentiles (msec): 00:21:07.671 | 1.00th=[ 27], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 57], 00:21:07.671 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:21:07.671 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 121], 00:21:07.671 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:21:07.671 | 99.99th=[ 144] 00:21:07.671 bw ( KiB/s): min= 608, max= 1200, per=4.27%, avg=842.20, stdev=136.08, samples=20 00:21:07.671 iops : min= 152, max= 300, avg=210.50, stdev=34.04, samples=20 00:21:07.671 lat (msec) : 20=0.33%, 50=13.93%, 100=73.50%, 250=12.23% 00:21:07.671 cpu : usr=35.94%, sys=1.22%, ctx=1005, majf=0, minf=9 00:21:07.671 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.8%, 16=15.1%, 32=0.0%, >=64=0.0% 00:21:07.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 issued rwts: total=2117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.671 filename2: (groupid=0, jobs=1): err= 0: pid=83500: Mon Jun 10 08:16:27 2024 00:21:07.671 read: IOPS=197, BW=792KiB/s (811kB/s)(7948KiB/10036msec) 00:21:07.671 slat (usec): min=5, max=8040, avg=45.11, stdev=411.46 00:21:07.671 clat (msec): min=23, max=158, avg=80.50, stdev=21.97 00:21:07.671 lat (msec): min=23, max=158, avg=80.55, stdev=21.97 00:21:07.671 clat percentiles (msec): 00:21:07.671 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 61], 00:21:07.671 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 84], 00:21:07.671 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 121], 00:21:07.671 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 159], 00:21:07.671 | 99.99th=[ 159] 00:21:07.671 bw ( KiB/s): min= 608, max= 1192, per=4.00%, avg=788.40, stdev=132.64, samples=20 00:21:07.671 iops : min= 152, max= 298, avg=197.10, stdev=33.16, samples=20 00:21:07.671 lat (msec) : 50=8.35%, 100=74.84%, 250=16.81% 00:21:07.671 cpu : usr=31.91%, sys=1.22%, ctx=848, majf=0, minf=9 00:21:07.671 IO depths : 1=0.1%, 2=2.1%, 4=8.2%, 8=74.3%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:07.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 complete : 0=0.0%, 4=89.7%, 8=8.5%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 issued rwts: total=1987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.671 filename2: (groupid=0, jobs=1): err= 0: pid=83501: Mon Jun 10 08:16:27 2024 00:21:07.671 read: IOPS=175, BW=703KiB/s (720kB/s)(7040KiB/10010msec) 00:21:07.671 slat (usec): min=3, max=8036, avg=37.72, stdev=350.93 00:21:07.671 clat (msec): min=18, max=183, avg=90.73, stdev=26.36 00:21:07.671 lat (msec): 
min=18, max=183, avg=90.76, stdev=26.37 00:21:07.671 clat percentiles (msec): 00:21:07.671 | 1.00th=[ 35], 5.00th=[ 58], 10.00th=[ 66], 20.00th=[ 73], 00:21:07.671 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 90], 00:21:07.671 | 70.00th=[ 102], 80.00th=[ 110], 90.00th=[ 127], 95.00th=[ 142], 00:21:07.671 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 184], 00:21:07.671 | 99.99th=[ 184] 00:21:07.671 bw ( KiB/s): min= 400, max= 1136, per=3.57%, avg=703.00, stdev=159.19, samples=20 00:21:07.671 iops : min= 100, max= 284, avg=175.70, stdev=39.84, samples=20 00:21:07.671 lat (msec) : 20=0.11%, 50=4.43%, 100=64.09%, 250=31.36% 00:21:07.671 cpu : usr=43.65%, sys=1.63%, ctx=1341, majf=0, minf=9 00:21:07.671 IO depths : 1=0.1%, 2=6.2%, 4=24.8%, 8=56.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:21:07.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 issued rwts: total=1760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.671 filename2: (groupid=0, jobs=1): err= 0: pid=83502: Mon Jun 10 08:16:27 2024 00:21:07.671 read: IOPS=193, BW=775KiB/s (793kB/s)(7780KiB/10045msec) 00:21:07.671 slat (usec): min=5, max=8043, avg=34.18, stdev=314.94 00:21:07.671 clat (msec): min=35, max=156, avg=82.35, stdev=20.99 00:21:07.671 lat (msec): min=35, max=156, avg=82.38, stdev=21.01 00:21:07.671 clat percentiles (msec): 00:21:07.671 | 1.00th=[ 48], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 64], 00:21:07.671 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 84], 00:21:07.671 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 110], 95.00th=[ 122], 00:21:07.671 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 157], 00:21:07.671 | 99.99th=[ 157] 00:21:07.671 bw ( KiB/s): min= 616, max= 1024, per=3.91%, avg=771.50, stdev=97.91, samples=20 00:21:07.671 iops : min= 154, max= 256, avg=192.85, stdev=24.49, samples=20 00:21:07.671 lat (msec) : 50=6.32%, 100=77.22%, 250=16.45% 00:21:07.671 cpu : usr=31.83%, sys=1.33%, ctx=850, majf=0, minf=9 00:21:07.671 IO depths : 1=0.1%, 2=3.2%, 4=13.1%, 8=69.4%, 16=14.2%, 32=0.0%, >=64=0.0% 00:21:07.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 complete : 0=0.0%, 4=90.7%, 8=6.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.671 filename2: (groupid=0, jobs=1): err= 0: pid=83503: Mon Jun 10 08:16:27 2024 00:21:07.671 read: IOPS=191, BW=767KiB/s (785kB/s)(7676KiB/10008msec) 00:21:07.671 slat (usec): min=3, max=8049, avg=44.83, stdev=414.49 00:21:07.671 clat (msec): min=18, max=181, avg=83.21, stdev=23.59 00:21:07.671 lat (msec): min=18, max=181, avg=83.25, stdev=23.58 00:21:07.671 clat percentiles (msec): 00:21:07.671 | 1.00th=[ 35], 5.00th=[ 49], 10.00th=[ 56], 20.00th=[ 69], 00:21:07.671 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 84], 00:21:07.671 | 70.00th=[ 88], 80.00th=[ 101], 90.00th=[ 118], 95.00th=[ 130], 00:21:07.671 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 182], 99.95th=[ 182], 00:21:07.671 | 99.99th=[ 182] 00:21:07.671 bw ( KiB/s): min= 512, max= 1024, per=3.87%, avg=763.55, stdev=116.49, samples=20 00:21:07.671 iops : min= 128, max= 256, avg=190.85, stdev=29.13, samples=20 00:21:07.671 lat (msec) : 20=0.36%, 50=5.47%, 100=74.21%, 250=19.96% 00:21:07.671 cpu : usr=39.28%, 
sys=1.43%, ctx=1157, majf=0, minf=9 00:21:07.671 IO depths : 1=0.1%, 2=3.8%, 4=15.2%, 8=67.3%, 16=13.7%, 32=0.0%, >=64=0.0% 00:21:07.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 complete : 0=0.0%, 4=91.3%, 8=5.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 issued rwts: total=1919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.671 filename2: (groupid=0, jobs=1): err= 0: pid=83504: Mon Jun 10 08:16:27 2024 00:21:07.671 read: IOPS=215, BW=860KiB/s (881kB/s)(8608KiB/10005msec) 00:21:07.671 slat (usec): min=4, max=8049, avg=40.13, stdev=361.68 00:21:07.671 clat (msec): min=17, max=152, avg=74.19, stdev=23.17 00:21:07.671 lat (msec): min=18, max=152, avg=74.23, stdev=23.16 00:21:07.671 clat percentiles (msec): 00:21:07.671 | 1.00th=[ 24], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 55], 00:21:07.671 | 30.00th=[ 60], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 79], 00:21:07.671 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 109], 95.00th=[ 123], 00:21:07.671 | 99.00th=[ 133], 99.50th=[ 134], 99.90th=[ 138], 99.95th=[ 153], 00:21:07.671 | 99.99th=[ 153] 00:21:07.671 bw ( KiB/s): min= 614, max= 1304, per=4.35%, avg=858.84, stdev=152.04, samples=19 00:21:07.671 iops : min= 153, max= 326, avg=214.68, stdev=38.06, samples=19 00:21:07.671 lat (msec) : 20=0.33%, 50=14.31%, 100=72.77%, 250=12.59% 00:21:07.671 cpu : usr=40.63%, sys=1.59%, ctx=1249, majf=0, minf=9 00:21:07.671 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:07.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 complete : 0=0.0%, 4=87.6%, 8=11.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 issued rwts: total=2152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.671 filename2: (groupid=0, jobs=1): err= 0: pid=83505: Mon Jun 10 08:16:27 2024 00:21:07.671 read: IOPS=205, BW=820KiB/s (840kB/s)(8232KiB/10035msec) 00:21:07.671 slat (usec): min=4, max=8033, avg=35.75, stdev=353.06 00:21:07.671 clat (msec): min=21, max=142, avg=77.82, stdev=22.56 00:21:07.671 lat (msec): min=21, max=142, avg=77.86, stdev=22.56 00:21:07.671 clat percentiles (msec): 00:21:07.671 | 1.00th=[ 27], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:21:07.671 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:21:07.671 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 114], 95.00th=[ 122], 00:21:07.671 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 142], 00:21:07.671 | 99.99th=[ 142] 00:21:07.671 bw ( KiB/s): min= 560, max= 1224, per=4.14%, avg=816.80, stdev=139.24, samples=20 00:21:07.671 iops : min= 140, max= 306, avg=204.20, stdev=34.81, samples=20 00:21:07.671 lat (msec) : 50=10.98%, 100=74.78%, 250=14.24% 00:21:07.671 cpu : usr=34.42%, sys=0.92%, ctx=931, majf=0, minf=9 00:21:07.671 IO depths : 1=0.1%, 2=2.1%, 4=8.3%, 8=74.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:21:07.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 complete : 0=0.0%, 4=89.3%, 8=8.9%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.671 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.671 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:07.671 00:21:07.671 Run status group 0 (all jobs): 00:21:07.671 READ: bw=19.2MiB/s (20.2MB/s), 703KiB/s-885KiB/s (720kB/s-906kB/s), io=194MiB (203MB), run=10005-10063msec 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.671 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 bdev_null0 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 [2024-06-10 08:16:27.816491] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.672 bdev_null1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:07.672 { 00:21:07.672 "params": { 00:21:07.672 "name": "Nvme$subsystem", 00:21:07.672 "trtype": "$TEST_TRANSPORT", 00:21:07.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.672 "adrfam": "ipv4", 00:21:07.672 "trsvcid": "$NVMF_PORT", 00:21:07.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.672 "hdgst": ${hdgst:-false}, 00:21:07.672 "ddgst": ${ddgst:-false} 00:21:07.672 }, 00:21:07.672 "method": "bdev_nvme_attach_controller" 00:21:07.672 } 00:21:07.672 EOF 00:21:07.672 )") 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:21:07.672 08:16:27 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:07.672 { 00:21:07.672 "params": { 00:21:07.672 "name": "Nvme$subsystem", 00:21:07.672 "trtype": "$TEST_TRANSPORT", 00:21:07.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.672 "adrfam": "ipv4", 00:21:07.672 "trsvcid": "$NVMF_PORT", 00:21:07.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.672 "hdgst": ${hdgst:-false}, 00:21:07.672 "ddgst": ${ddgst:-false} 00:21:07.672 }, 00:21:07.672 "method": "bdev_nvme_attach_controller" 00:21:07.672 } 00:21:07.672 EOF 00:21:07.672 )") 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:07.672 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:07.672 "params": { 00:21:07.672 "name": "Nvme0", 00:21:07.672 "trtype": "tcp", 00:21:07.672 "traddr": "10.0.0.2", 00:21:07.673 "adrfam": "ipv4", 00:21:07.673 "trsvcid": "4420", 00:21:07.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:07.673 "hdgst": false, 00:21:07.673 "ddgst": false 00:21:07.673 }, 00:21:07.673 "method": "bdev_nvme_attach_controller" 00:21:07.673 },{ 00:21:07.673 "params": { 00:21:07.673 "name": "Nvme1", 00:21:07.673 "trtype": "tcp", 00:21:07.673 "traddr": "10.0.0.2", 00:21:07.673 "adrfam": "ipv4", 00:21:07.673 "trsvcid": "4420", 00:21:07.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.673 "hdgst": false, 00:21:07.673 "ddgst": false 00:21:07.673 }, 00:21:07.673 "method": "bdev_nvme_attach_controller" 00:21:07.673 }' 00:21:07.673 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:21:07.673 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:21:07.673 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.673 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.673 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:21:07.673 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:21:07.673 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:21:07.673 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:21:07.673 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:07.673 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.673 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:07.673 ... 00:21:07.673 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:07.673 ... 
00:21:07.673 fio-3.35 00:21:07.673 Starting 4 threads 00:21:11.862 00:21:11.862 filename0: (groupid=0, jobs=1): err= 0: pid=83652: Mon Jun 10 08:16:33 2024 00:21:11.862 read: IOPS=2024, BW=15.8MiB/s (16.6MB/s)(79.1MiB/5003msec) 00:21:11.862 slat (nsec): min=7730, max=61473, avg=22756.82, stdev=9099.39 00:21:11.862 clat (usec): min=1679, max=27529, avg=3899.03, stdev=1475.26 00:21:11.862 lat (usec): min=1694, max=27564, avg=3921.78, stdev=1473.47 00:21:11.862 clat percentiles (usec): 00:21:11.862 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2376], 20.00th=[ 2638], 00:21:11.862 | 30.00th=[ 3064], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 4948], 00:21:11.862 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5211], 00:21:11.862 | 99.00th=[ 5276], 99.50th=[11994], 99.90th=[18220], 99.95th=[18744], 00:21:11.862 | 99.99th=[18744] 00:21:11.862 bw ( KiB/s): min=13136, max=16608, per=24.98%, avg=16177.78, stdev=1141.35, samples=9 00:21:11.862 iops : min= 1642, max= 2076, avg=2022.22, stdev=142.67, samples=9 00:21:11.862 lat (msec) : 2=0.18%, 4=54.48%, 10=44.59%, 20=0.74%, 50=0.01% 00:21:11.862 cpu : usr=94.38%, sys=4.64%, ctx=8, majf=0, minf=9 00:21:11.862 IO depths : 1=0.1%, 2=0.1%, 4=63.7%, 8=36.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:11.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.862 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.862 issued rwts: total=10130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:11.862 filename0: (groupid=0, jobs=1): err= 0: pid=83653: Mon Jun 10 08:16:33 2024 00:21:11.862 read: IOPS=2020, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5004msec) 00:21:11.862 slat (nsec): min=6973, max=60004, avg=19114.03, stdev=8519.59 00:21:11.862 clat (usec): min=2037, max=24331, avg=3912.43, stdev=1440.45 00:21:11.862 lat (usec): min=2048, max=24350, avg=3931.54, stdev=1438.85 00:21:11.862 clat percentiles (usec): 00:21:11.862 | 1.00th=[ 2474], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:21:11.862 | 30.00th=[ 2737], 40.00th=[ 2966], 50.00th=[ 3195], 60.00th=[ 4883], 00:21:11.862 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5145], 95.00th=[ 5276], 00:21:11.862 | 99.00th=[ 5342], 99.50th=[11994], 99.90th=[18220], 99.95th=[18744], 00:21:11.862 | 99.99th=[18744] 00:21:11.862 bw ( KiB/s): min=13056, max=16608, per=24.93%, avg=16147.56, stdev=1160.96, samples=9 00:21:11.862 iops : min= 1632, max= 2076, avg=2018.44, stdev=145.12, samples=9 00:21:11.862 lat (msec) : 4=53.91%, 10=45.34%, 20=0.74%, 50=0.01% 00:21:11.862 cpu : usr=93.62%, sys=5.36%, ctx=7, majf=0, minf=0 00:21:11.862 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:11.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.863 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.863 issued rwts: total=10112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.863 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:11.863 filename1: (groupid=0, jobs=1): err= 0: pid=83654: Mon Jun 10 08:16:33 2024 00:21:11.863 read: IOPS=2026, BW=15.8MiB/s (16.6MB/s)(79.2MiB/5001msec) 00:21:11.863 slat (nsec): min=7968, max=98582, avg=22626.34, stdev=8939.21 00:21:11.863 clat (usec): min=1542, max=25738, avg=3895.89, stdev=1465.49 00:21:11.863 lat (usec): min=1558, max=25766, avg=3918.52, stdev=1463.64 00:21:11.863 clat percentiles (usec): 00:21:11.863 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2376], 
20.00th=[ 2638], 00:21:11.863 | 30.00th=[ 3064], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 4948], 00:21:11.863 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5211], 00:21:11.863 | 99.00th=[ 5276], 99.50th=[11994], 99.90th=[18220], 99.95th=[18482], 00:21:11.863 | 99.99th=[18744] 00:21:11.863 bw ( KiB/s): min=13200, max=16608, per=24.98%, avg=16177.78, stdev=1117.32, samples=9 00:21:11.863 iops : min= 1650, max= 2076, avg=2022.22, stdev=139.67, samples=9 00:21:11.863 lat (msec) : 2=0.23%, 4=54.51%, 10=44.51%, 20=0.74%, 50=0.01% 00:21:11.863 cpu : usr=94.60%, sys=4.42%, ctx=98, majf=0, minf=0 00:21:11.863 IO depths : 1=0.1%, 2=0.1%, 4=63.7%, 8=36.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:11.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.863 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.863 issued rwts: total=10137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.863 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:11.863 filename1: (groupid=0, jobs=1): err= 0: pid=83655: Mon Jun 10 08:16:33 2024 00:21:11.863 read: IOPS=2025, BW=15.8MiB/s (16.6MB/s)(79.1MiB/5002msec) 00:21:11.863 slat (nsec): min=7599, max=95280, avg=21530.83, stdev=8579.28 00:21:11.863 clat (usec): min=1490, max=26586, avg=3901.08, stdev=1468.27 00:21:11.863 lat (usec): min=1505, max=26614, avg=3922.61, stdev=1467.17 00:21:11.863 clat percentiles (usec): 00:21:11.863 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2409], 20.00th=[ 2671], 00:21:11.863 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3195], 60.00th=[ 4948], 00:21:11.863 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5211], 00:21:11.863 | 99.00th=[ 5342], 99.50th=[11994], 99.90th=[18220], 99.95th=[18482], 00:21:11.863 | 99.99th=[18744] 00:21:11.863 bw ( KiB/s): min=13162, max=16608, per=24.98%, avg=16177.11, stdev=1131.29, samples=9 00:21:11.863 iops : min= 1645, max= 2076, avg=2022.11, stdev=141.49, samples=9 00:21:11.863 lat (msec) : 2=0.23%, 4=54.23%, 10=44.80%, 20=0.74%, 50=0.01% 00:21:11.863 cpu : usr=94.14%, sys=4.86%, ctx=40, majf=0, minf=9 00:21:11.863 IO depths : 1=0.1%, 2=0.1%, 4=63.7%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:11.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.863 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.863 issued rwts: total=10130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.863 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:11.863 00:21:11.863 Run status group 0 (all jobs): 00:21:11.863 READ: bw=63.2MiB/s (66.3MB/s), 15.8MiB/s-15.8MiB/s (16.6MB/s-16.6MB/s), io=316MiB (332MB), run=5001-5004msec 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:12.121 ************************************ 00:21:12.121 END TEST fio_dif_rand_params 00:21:12.121 ************************************ 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.121 00:21:12.121 real 0m23.638s 00:21:12.121 user 2m5.274s 00:21:12.121 sys 0m6.425s 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:12.121 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:12.379 08:16:33 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:12.379 08:16:33 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:12.379 08:16:33 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:12.379 08:16:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:12.379 ************************************ 00:21:12.379 START TEST fio_dif_digest 00:21:12.379 ************************************ 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:12.379 bdev_null0 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:12.379 [2024-06-10 08:16:34.043720] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.379 { 00:21:12.379 "params": { 00:21:12.379 "name": "Nvme$subsystem", 
00:21:12.379 "trtype": "$TEST_TRANSPORT", 00:21:12.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.379 "adrfam": "ipv4", 00:21:12.379 "trsvcid": "$NVMF_PORT", 00:21:12.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.379 "hdgst": ${hdgst:-false}, 00:21:12.379 "ddgst": ${ddgst:-false} 00:21:12.379 }, 00:21:12.379 "method": "bdev_nvme_attach_controller" 00:21:12.379 } 00:21:12.379 EOF 00:21:12.379 )") 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:12.379 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:12.380 "params": { 00:21:12.380 "name": "Nvme0", 00:21:12.380 "trtype": "tcp", 00:21:12.380 "traddr": "10.0.0.2", 00:21:12.380 "adrfam": "ipv4", 00:21:12.380 "trsvcid": "4420", 00:21:12.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:12.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:12.380 "hdgst": true, 00:21:12.380 "ddgst": true 00:21:12.380 }, 00:21:12.380 "method": "bdev_nvme_attach_controller" 00:21:12.380 }' 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:12.380 08:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:12.380 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:12.380 ... 
00:21:12.380 fio-3.35 00:21:12.380 Starting 3 threads 00:21:24.574 00:21:24.574 filename0: (groupid=0, jobs=1): err= 0: pid=83761: Mon Jun 10 08:16:44 2024 00:21:24.574 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(284MiB/10001msec) 00:21:24.574 slat (usec): min=7, max=142, avg=15.71, stdev= 7.99 00:21:24.574 clat (usec): min=11852, max=16315, avg=13172.33, stdev=151.61 00:21:24.574 lat (usec): min=11861, max=16349, avg=13188.05, stdev=152.80 00:21:24.574 clat percentiles (usec): 00:21:24.574 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13173], 20.00th=[13173], 00:21:24.574 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:21:24.574 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13173], 95.00th=[13304], 00:21:24.574 | 99.00th=[13304], 99.50th=[13566], 99.90th=[16319], 99.95th=[16319], 00:21:24.574 | 99.99th=[16319] 00:21:24.574 bw ( KiB/s): min=28416, max=29184, per=33.33%, avg=29062.74, stdev=287.72, samples=19 00:21:24.574 iops : min= 222, max= 228, avg=227.05, stdev= 2.25, samples=19 00:21:24.574 lat (msec) : 20=100.00% 00:21:24.574 cpu : usr=90.72%, sys=8.42%, ctx=168, majf=0, minf=0 00:21:24.574 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:24.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.574 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.574 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:24.574 filename0: (groupid=0, jobs=1): err= 0: pid=83762: Mon Jun 10 08:16:44 2024 00:21:24.574 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(284MiB/10009msec) 00:21:24.574 slat (nsec): min=8120, max=50168, avg=15206.69, stdev=6952.25 00:21:24.574 clat (usec): min=9167, max=16870, avg=13165.30, stdev=229.27 00:21:24.574 lat (usec): min=9182, max=16897, avg=13180.51, stdev=229.48 00:21:24.574 clat percentiles (usec): 00:21:24.574 | 1.00th=[13042], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:24.574 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:21:24.574 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13173], 95.00th=[13173], 00:21:24.574 | 99.00th=[13304], 99.50th=[13304], 99.90th=[16909], 99.95th=[16909], 00:21:24.574 | 99.99th=[16909] 00:21:24.574 bw ( KiB/s): min=28416, max=29184, per=33.38%, avg=29106.11, stdev=233.51, samples=19 00:21:24.574 iops : min= 222, max= 228, avg=227.37, stdev= 1.89, samples=19 00:21:24.574 lat (msec) : 10=0.13%, 20=99.87% 00:21:24.574 cpu : usr=91.36%, sys=8.10%, ctx=90, majf=0, minf=9 00:21:24.574 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:24.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.574 issued rwts: total=2274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.574 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:24.574 filename0: (groupid=0, jobs=1): err= 0: pid=83763: Mon Jun 10 08:16:44 2024 00:21:24.574 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(284MiB/10009msec) 00:21:24.574 slat (nsec): min=8076, max=65425, avg=17527.89, stdev=8377.32 00:21:24.574 clat (usec): min=9169, max=14619, avg=13159.12, stdev=167.48 00:21:24.574 lat (usec): min=9184, max=14644, avg=13176.64, stdev=168.29 00:21:24.574 clat percentiles (usec): 00:21:24.574 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13173], 20.00th=[13173], 00:21:24.574 | 30.00th=[13173], 40.00th=[13173], 
50.00th=[13173], 60.00th=[13173], 00:21:24.574 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13173], 95.00th=[13304], 00:21:24.574 | 99.00th=[13304], 99.50th=[13304], 99.90th=[14615], 99.95th=[14615], 00:21:24.574 | 99.99th=[14615] 00:21:24.574 bw ( KiB/s): min=28416, max=29184, per=33.37%, avg=29103.16, stdev=242.15, samples=19 00:21:24.574 iops : min= 222, max= 228, avg=227.37, stdev= 1.89, samples=19 00:21:24.574 lat (msec) : 10=0.13%, 20=99.87% 00:21:24.574 cpu : usr=91.73%, sys=7.69%, ctx=139, majf=0, minf=0 00:21:24.574 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:24.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.574 issued rwts: total=2274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.574 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:24.574 00:21:24.574 Run status group 0 (all jobs): 00:21:24.574 READ: bw=85.2MiB/s (89.3MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=852MiB (894MB), run=10001-10009msec 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:24.574 ************************************ 00:21:24.574 END TEST fio_dif_digest 00:21:24.574 ************************************ 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.574 00:21:24.574 real 0m11.027s 00:21:24.574 user 0m28.067s 00:21:24.574 sys 0m2.723s 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:24.574 08:16:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:24.574 08:16:45 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:24.574 08:16:45 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.574 rmmod nvme_tcp 00:21:24.574 rmmod nvme_fabrics 00:21:24.574 rmmod nvme_keyring 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.574 08:16:45 
nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83003 ']' 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83003 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 83003 ']' 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 83003 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 83003 00:21:24.574 killing process with pid 83003 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 83003' 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@968 -- # kill 83003 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@973 -- # wait 83003 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:24.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:24.574 Waiting for block devices as requested 00:21:24.574 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:24.574 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.574 08:16:45 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.574 08:16:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:24.575 08:16:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.575 08:16:45 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:24.575 ************************************ 00:21:24.575 END TEST nvmf_dif 00:21:24.575 ************************************ 00:21:24.575 00:21:24.575 real 0m59.917s 00:21:24.575 user 3m48.846s 00:21:24.575 sys 0m18.341s 00:21:24.575 08:16:45 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:24.575 08:16:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:24.575 08:16:46 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:24.575 08:16:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:24.575 08:16:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:24.575 08:16:46 -- common/autotest_common.sh@10 -- # set +x 00:21:24.575 ************************************ 00:21:24.575 START TEST nvmf_abort_qd_sizes 00:21:24.575 ************************************ 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:24.575 * Looking for test storage... 
00:21:24.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:24.575 08:16:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:24.575 Cannot find device "nvmf_tgt_br" 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:24.575 Cannot find device "nvmf_tgt_br2" 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:24.575 Cannot find device "nvmf_tgt_br" 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:24.575 Cannot find device "nvmf_tgt_br2" 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:24.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:24.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:24.575 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:24.576 08:16:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:24.576 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:24.834 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:24.834 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:24.834 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:24.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:21:24.834 00:21:24.834 --- 10.0.0.2 ping statistics --- 00:21:24.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.834 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:24.834 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:24.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:24.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:21:24.834 00:21:24.834 --- 10.0.0.3 ping statistics --- 00:21:24.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.834 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:24.834 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:24.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:24.834 00:21:24.834 --- 10.0.0.1 ping statistics --- 00:21:24.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.834 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:24.834 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.834 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:24.834 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:24.834 08:16:46 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:25.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:25.516 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:25.516 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84355 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84355 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 84355 ']' 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.516 08:16:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:25.775 08:16:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:25.775 [2024-06-10 08:16:47.423105] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
00:21:25.775 [2024-06-10 08:16:47.423228] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.775 [2024-06-10 08:16:47.567754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.033 [2024-06-10 08:16:47.704251] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.033 [2024-06-10 08:16:47.704606] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.033 [2024-06-10 08:16:47.704796] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.033 [2024-06-10 08:16:47.704971] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.033 [2024-06-10 08:16:47.705014] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.033 [2024-06-10 08:16:47.705300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.033 [2024-06-10 08:16:47.705434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.033 [2024-06-10 08:16:47.705528] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.033 [2024-06-10 08:16:47.705534] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.033 [2024-06-10 08:16:47.770231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:26.600 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- 
scripts/common.sh@233 -- # printf %02x 1 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:21:26.859 08:16:48 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:26.859 08:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:26.859 ************************************ 00:21:26.859 START TEST spdk_target_abort 00:21:26.859 ************************************ 00:21:26.859 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:21:26.859 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:26.859 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:26.859 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.859 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:26.859 spdk_targetn1 00:21:26.859 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:26.860 [2024-06-10 08:16:48.608423] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:26.860 [2024-06-10 08:16:48.640716] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:26.860 08:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:30.143 Initializing NVMe Controllers 00:21:30.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:30.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:30.143 Initialization complete. Launching workers. 
00:21:30.143 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9070, failed: 0 00:21:30.143 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1022, failed to submit 8048 00:21:30.143 success 930, unsuccess 92, failed 0 00:21:30.143 08:16:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:30.143 08:16:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:33.442 Initializing NVMe Controllers 00:21:33.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:33.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:33.442 Initialization complete. Launching workers. 00:21:33.442 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8964, failed: 0 00:21:33.442 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1183, failed to submit 7781 00:21:33.442 success 349, unsuccess 834, failed 0 00:21:33.442 08:16:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:33.442 08:16:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:36.726 Initializing NVMe Controllers 00:21:36.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:36.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:36.726 Initialization complete. Launching workers. 
00:21:36.726 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28798, failed: 0 00:21:36.726 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2245, failed to submit 26553 00:21:36.726 success 385, unsuccess 1860, failed 0 00:21:36.726 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:36.726 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.726 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:36.726 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.726 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:36.726 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.726 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84355 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 84355 ']' 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 84355 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 84355 00:21:37.293 killing process with pid 84355 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 84355' 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 84355 00:21:37.293 08:16:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 84355 00:21:37.551 00:21:37.551 real 0m10.682s 00:21:37.551 user 0m42.965s 00:21:37.551 sys 0m2.170s 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:37.551 ************************************ 00:21:37.551 END TEST spdk_target_abort 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:37.551 ************************************ 00:21:37.551 08:16:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:37.551 08:16:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:37.551 08:16:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:37.551 08:16:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:37.551 ************************************ 00:21:37.551 START TEST kernel_target_abort 00:21:37.551 
************************************ 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:37.551 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:37.552 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:37.552 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:37.552 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:37.552 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:37.552 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:37.552 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:37.552 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:37.552 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:37.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:37.811 Waiting for block devices as requested 00:21:38.077 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:38.077 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:38.077 No valid GPT data, bailing 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n2 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:38.077 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:38.366 No valid GPT data, bailing 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n3 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:38.366 08:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:38.366 No valid GPT data, bailing 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:38.366 No valid GPT data, bailing 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab --hostid=0b063e5e-64f6-4b4f-b15f-bd51b74609ab -a 10.0.0.1 -t tcp -s 4420 00:21:38.366 00:21:38.366 Discovery Log Number of Records 2, Generation counter 2 00:21:38.366 =====Discovery Log Entry 0====== 00:21:38.366 trtype: tcp 00:21:38.366 adrfam: ipv4 00:21:38.366 subtype: current discovery subsystem 00:21:38.366 treq: not specified, sq flow control disable supported 00:21:38.366 portid: 1 00:21:38.366 trsvcid: 4420 00:21:38.366 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:38.366 traddr: 10.0.0.1 00:21:38.366 eflags: none 00:21:38.366 sectype: none 00:21:38.366 =====Discovery Log Entry 1====== 00:21:38.366 trtype: tcp 00:21:38.366 adrfam: ipv4 00:21:38.366 subtype: nvme subsystem 00:21:38.366 treq: not specified, sq flow control disable supported 00:21:38.366 portid: 1 00:21:38.366 trsvcid: 4420 00:21:38.366 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:38.366 traddr: 10.0.0.1 00:21:38.366 eflags: none 00:21:38.366 sectype: none 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:38.366 08:17:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:38.366 08:17:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:41.646 Initializing NVMe Controllers 00:21:41.646 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:41.646 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:41.646 Initialization complete. Launching workers. 00:21:41.646 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33707, failed: 0 00:21:41.646 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33707, failed to submit 0 00:21:41.646 success 0, unsuccess 33707, failed 0 00:21:41.646 08:17:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:41.646 08:17:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:44.928 Initializing NVMe Controllers 00:21:44.928 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:44.928 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:44.928 Initialization complete. Launching workers. 
00:21:44.928 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72433, failed: 0 00:21:44.928 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31851, failed to submit 40582 00:21:44.928 success 0, unsuccess 31851, failed 0 00:21:44.928 08:17:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:44.928 08:17:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:48.209 Initializing NVMe Controllers 00:21:48.209 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:48.209 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:48.209 Initialization complete. Launching workers. 00:21:48.209 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85227, failed: 0 00:21:48.209 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21358, failed to submit 63869 00:21:48.209 success 0, unsuccess 21358, failed 0 00:21:48.209 08:17:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:48.209 08:17:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:48.209 08:17:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:48.209 08:17:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:48.209 08:17:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:48.209 08:17:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:48.209 08:17:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:48.209 08:17:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:48.209 08:17:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:48.209 08:17:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:48.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:50.674 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:50.674 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:50.674 ************************************ 00:21:50.674 END TEST kernel_target_abort 00:21:50.674 ************************************ 00:21:50.674 00:21:50.674 real 0m13.240s 00:21:50.674 user 0m6.335s 00:21:50.674 sys 0m4.303s 00:21:50.674 08:17:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:50.674 08:17:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:50.674 08:17:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:50.674 08:17:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:50.674 
08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.674 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.932 rmmod nvme_tcp 00:21:50.932 rmmod nvme_fabrics 00:21:50.932 rmmod nvme_keyring 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84355 ']' 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84355 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 84355 ']' 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 84355 00:21:50.932 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (84355) - No such process 00:21:50.932 Process with pid 84355 is not found 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 84355 is not found' 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:50.932 08:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:51.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:51.190 Waiting for block devices as requested 00:21:51.449 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:51.449 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:51.449 08:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:51.449 08:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:51.449 08:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.449 08:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.449 08:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.449 08:17:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:51.449 08:17:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.449 08:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:51.449 ************************************ 00:21:51.449 END TEST nvmf_abort_qd_sizes 00:21:51.449 ************************************ 00:21:51.449 00:21:51.449 real 0m27.248s 00:21:51.449 user 0m50.493s 00:21:51.449 sys 0m7.880s 00:21:51.449 08:17:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:51.449 08:17:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:51.708 08:17:13 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:51.708 08:17:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:51.708 08:17:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:51.708 08:17:13 -- common/autotest_common.sh@10 -- # set +x 
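The kernel_target_abort pass above drives the in-kernel nvmet target entirely through configfs. Shell redirections are not captured by xtrace, so the attribute files being written are not visible in the trace; the file names below are inferred from the standard nvmet configfs layout, while the values, paths, and ordering come from the log. A condensed sketch of configure_kernel_target and its clean_kernel_target teardown:
  # load the kernel target and lay out one subsystem, one namespace, one port
  modprobe nvmet                                            # nvmet_tcp gets loaded for the tcp port
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"       # inferred attribute file
  echo 1 > "$sub/attr_allow_any_host"                              # inferred attribute file
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"              # block device picked by the GPT scan above
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"                              # inferred attribute files for the port
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                                 # expose the subsystem on the port
  nvme discover -t tcp -a 10.0.0.1 -s 4420                         # yields the two discovery log entries shown
  # teardown (clean_kernel_target)
  echo 0 > "$sub/namespaces/1/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$sub/namespaces/1" "$port" "$sub"
  modprobe -r nvmet_tcp nvmet
The surrounding rabort calls then point build/examples/abort at 10.0.0.1:4420 with queue depths 4, 24, and 64, exactly as was done against the SPDK userspace target on 10.0.0.2 earlier in the run.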
00:21:51.708 ************************************ 00:21:51.708 START TEST keyring_file 00:21:51.708 ************************************ 00:21:51.708 08:17:13 keyring_file -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:51.708 * Looking for test storage... 00:21:51.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:51.708 08:17:13 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:51.708 08:17:13 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.708 08:17:13 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.708 08:17:13 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.708 08:17:13 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.708 08:17:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.708 08:17:13 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.708 08:17:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:51.708 08:17:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:51.708 08:17:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:51.708 08:17:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:51.708 08:17:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:51.708 08:17:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:51.708 08:17:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:51.708 08:17:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BqJTtmWd7F 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:51.708 08:17:13 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BqJTtmWd7F 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BqJTtmWd7F 00:21:51.708 08:17:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.BqJTtmWd7F 00:21:51.708 08:17:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.He87MTXhiI 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:51.708 08:17:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.He87MTXhiI 00:21:51.708 08:17:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.He87MTXhiI 00:21:51.967 08:17:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.He87MTXhiI 00:21:51.967 08:17:13 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.967 08:17:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=85223 00:21:51.967 08:17:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85223 00:21:51.967 08:17:13 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 85223 ']' 00:21:51.967 08:17:13 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.967 08:17:13 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:51.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.967 08:17:13 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.967 08:17:13 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:51.967 08:17:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:51.967 [2024-06-10 08:17:13.638729] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 
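The key material registered with bperf later in this test is prepared by the prep_key calls traced above. A short sketch of that flow follows; the interchange-format payload is left as a placeholder because the python one-liner that produces it is not visible in the xtrace.
  key=00112233445566778899aabbccddeeff      # key0 from file.sh; key1 is prepared the same way
  path=$(mktemp)                            # /tmp/tmp.BqJTtmWd7F in this run
  # format_interchange_psk wraps the raw hex key into an NVMe TLS PSK interchange
  # string (prefix NVMeTLSkey-1, digest 0, as in the trace); the encoding step is
  # an untraced python one-liner, so only a placeholder is written here.
  echo "NVMeTLSkey-1:00:<base64 payload derived from $key>:" > "$path"
  chmod 0600 "$path"                        # the tests keep the key file owner-only
  # later, once bdevperf is listening on /var/tmp/bperf.sock:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      keyring_file_add_key key0 "$path"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
From there the trace verifies the registered paths and refcounts with keyring_get_keys before attaching the TLS-enabled controller and running I/O.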
00:21:51.967 [2024-06-10 08:17:13.638836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85223 ] 00:21:51.967 [2024-06-10 08:17:13.776103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.225 [2024-06-10 08:17:13.942650] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.225 [2024-06-10 08:17:13.999456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:52.793 08:17:14 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:52.793 08:17:14 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:21:52.793 08:17:14 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:52.793 08:17:14 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:52.793 08:17:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:52.793 [2024-06-10 08:17:14.642324] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.051 null0 00:21:53.051 [2024-06-10 08:17:14.674306] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:53.051 [2024-06-10 08:17:14.674617] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:53.051 [2024-06-10 08:17:14.682304] tcp.c:3707:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:53.051 08:17:14 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.051 08:17:14 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:53.051 08:17:14 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:21:53.051 08:17:14 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:53.051 08:17:14 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:21:53.051 08:17:14 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:53.051 08:17:14 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:21:53.051 08:17:14 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:53.051 08:17:14 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:53.051 08:17:14 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.051 08:17:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:53.051 [2024-06-10 08:17:14.698324] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:53.051 request: 00:21:53.051 { 00:21:53.051 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:53.051 "secure_channel": false, 00:21:53.051 "listen_address": { 00:21:53.051 "trtype": "tcp", 00:21:53.052 "traddr": "127.0.0.1", 00:21:53.052 "trsvcid": "4420" 00:21:53.052 }, 00:21:53.052 "method": "nvmf_subsystem_add_listener", 00:21:53.052 "req_id": 1 00:21:53.052 } 00:21:53.052 Got JSON-RPC error response 00:21:53.052 response: 00:21:53.052 { 00:21:53.052 "code": -32602, 00:21:53.052 "message": "Invalid parameters" 00:21:53.052 } 00:21:53.052 08:17:14 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:53.052 08:17:14 
keyring_file -- common/autotest_common.sh@652 -- # es=1 00:21:53.052 08:17:14 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:53.052 08:17:14 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:53.052 08:17:14 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:53.052 08:17:14 keyring_file -- keyring/file.sh@46 -- # bperfpid=85240 00:21:53.052 08:17:14 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85240 /var/tmp/bperf.sock 00:21:53.052 08:17:14 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 85240 ']' 00:21:53.052 08:17:14 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:53.052 08:17:14 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:53.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:53.052 08:17:14 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:53.052 08:17:14 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:53.052 08:17:14 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:53.052 08:17:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:53.052 [2024-06-10 08:17:14.761348] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:21:53.052 [2024-06-10 08:17:14.762013] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85240 ] 00:21:53.052 [2024-06-10 08:17:14.904143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.317 [2024-06-10 08:17:15.034708] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.317 [2024-06-10 08:17:15.099372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:53.889 08:17:15 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:53.889 08:17:15 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:21:53.889 08:17:15 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BqJTtmWd7F 00:21:53.889 08:17:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BqJTtmWd7F 00:21:54.148 08:17:15 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.He87MTXhiI 00:21:54.148 08:17:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.He87MTXhiI 00:21:54.406 08:17:16 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:54.406 08:17:16 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:54.406 08:17:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.406 08:17:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.406 08:17:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:54.665 08:17:16 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.BqJTtmWd7F == \/\t\m\p\/\t\m\p\.\B\q\J\T\t\m\W\d\7\F ]] 00:21:54.665 08:17:16 
keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:54.665 08:17:16 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:54.665 08:17:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.665 08:17:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.665 08:17:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:54.925 08:17:16 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.He87MTXhiI == \/\t\m\p\/\t\m\p\.\H\e\8\7\M\T\X\h\i\I ]] 00:21:54.925 08:17:16 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:54.925 08:17:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:54.925 08:17:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:54.925 08:17:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.925 08:17:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:54.925 08:17:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.492 08:17:17 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:55.492 08:17:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:55.492 08:17:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.492 08:17:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:55.492 08:17:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.492 08:17:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:55.492 08:17:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.492 08:17:17 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:55.492 08:17:17 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:55.492 08:17:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:55.751 [2024-06-10 08:17:17.519546] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.751 nvme0n1 00:21:55.751 08:17:17 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:55.751 08:17:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:55.751 08:17:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.751 08:17:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.751 08:17:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.751 08:17:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:56.318 08:17:17 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:56.318 08:17:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:56.318 08:17:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:56.318 08:17:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:56.318 08:17:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:56.318 08:17:17 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:56.318 08:17:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:56.318 08:17:18 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:56.318 08:17:18 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:56.577 Running I/O for 1 seconds... 00:21:57.514 00:21:57.514 Latency(us) 00:21:57.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.514 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:57.514 nvme0n1 : 1.01 11892.66 46.46 0.00 0.00 10721.94 3813.00 16681.89 00:21:57.514 =================================================================================================================== 00:21:57.514 Total : 11892.66 46.46 0.00 0.00 10721.94 3813.00 16681.89 00:21:57.514 0 00:21:57.514 08:17:19 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:57.514 08:17:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:57.773 08:17:19 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:57.773 08:17:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:57.773 08:17:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:57.773 08:17:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:57.773 08:17:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:57.773 08:17:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:58.031 08:17:19 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:58.031 08:17:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:58.031 08:17:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:58.031 08:17:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:58.031 08:17:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.031 08:17:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.031 08:17:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:58.290 08:17:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:58.290 08:17:20 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:58.290 08:17:20 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:21:58.290 08:17:20 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:58.290 08:17:20 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:21:58.290 08:17:20 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:58.290 08:17:20 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:21:58.290 08:17:20 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:58.290 08:17:20 keyring_file -- 
common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:58.290 08:17:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:58.548 [2024-06-10 08:17:20.340109] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:58.548 [2024-06-10 08:17:20.340722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4b03a0 (107): Transport endpoint is not connected 00:21:58.548 [2024-06-10 08:17:20.341711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4b03a0 (9): Bad file descriptor 00:21:58.548 [2024-06-10 08:17:20.342707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:58.548 [2024-06-10 08:17:20.342734] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:58.548 [2024-06-10 08:17:20.342745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:58.548 request: 00:21:58.548 { 00:21:58.548 "name": "nvme0", 00:21:58.548 "trtype": "tcp", 00:21:58.548 "traddr": "127.0.0.1", 00:21:58.548 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:58.548 "adrfam": "ipv4", 00:21:58.548 "trsvcid": "4420", 00:21:58.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:58.548 "psk": "key1", 00:21:58.548 "method": "bdev_nvme_attach_controller", 00:21:58.548 "req_id": 1 00:21:58.548 } 00:21:58.548 Got JSON-RPC error response 00:21:58.548 response: 00:21:58.548 { 00:21:58.548 "code": -5, 00:21:58.548 "message": "Input/output error" 00:21:58.548 } 00:21:58.548 08:17:20 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:21:58.548 08:17:20 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:58.548 08:17:20 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:58.548 08:17:20 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:58.548 08:17:20 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:58.548 08:17:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:58.548 08:17:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:58.548 08:17:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.548 08:17:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.548 08:17:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:58.807 08:17:20 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:58.807 08:17:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:58.807 08:17:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:58.807 08:17:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:58.807 08:17:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.807 08:17:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.807 08:17:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
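The xtrace above repeatedly expands the same keyring/common.sh helpers (bperf_cmd, get_key, get_refcnt) before each refcount assertion. A minimal sketch of that pattern follows, reconstructed from the trace rather than copied from the source tree: the rpc.py path and the bperf.sock address are the ones shown in the log, while the helper bodies are inferred from the expanded commands and may differ in detail from the real keyring/common.sh.

    #!/usr/bin/env bash
    # Query the bdevperf app over its JSON-RPC socket and inspect one key,
    # the way the trace does before every (( refcnt == N )) check.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperfsock=/var/tmp/bperf.sock

    bperf_cmd() { "$rpc_py" -s "$bperfsock" "$@"; }

    get_key() {
        # Print the JSON object describing key "$1" (e.g. key0) from keyring_get_keys.
        bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
    }

    get_refcnt() {
        # Print only that key's reference count.
        get_key "$1" | jq -r .refcnt
    }

    # Usage, mirroring the checks in the trace around this point: after the failed
    # attach with key1, both registered keys sit at a reference count of 1.
    (( $(get_refcnt key0) == 1 ))
    (( $(get_refcnt key1) == 1 ))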
00:21:59.065 08:17:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:59.065 08:17:20 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:59.065 08:17:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:59.324 08:17:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:59.324 08:17:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:59.583 08:17:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:59.583 08:17:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.583 08:17:21 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:59.842 08:17:21 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:59.842 08:17:21 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.BqJTtmWd7F 00:21:59.842 08:17:21 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.BqJTtmWd7F 00:21:59.842 08:17:21 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:21:59.842 08:17:21 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.BqJTtmWd7F 00:21:59.842 08:17:21 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:21:59.842 08:17:21 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:59.842 08:17:21 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:21:59.842 08:17:21 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:59.842 08:17:21 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BqJTtmWd7F 00:21:59.842 08:17:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BqJTtmWd7F 00:22:00.100 [2024-06-10 08:17:21.916106] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BqJTtmWd7F': 0100660 00:22:00.100 [2024-06-10 08:17:21.916167] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:00.100 request: 00:22:00.100 { 00:22:00.100 "name": "key0", 00:22:00.100 "path": "/tmp/tmp.BqJTtmWd7F", 00:22:00.100 "method": "keyring_file_add_key", 00:22:00.100 "req_id": 1 00:22:00.100 } 00:22:00.100 Got JSON-RPC error response 00:22:00.100 response: 00:22:00.100 { 00:22:00.100 "code": -1, 00:22:00.100 "message": "Operation not permitted" 00:22:00.100 } 00:22:00.100 08:17:21 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:22:00.100 08:17:21 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:00.100 08:17:21 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:00.100 08:17:21 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:00.100 08:17:21 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.BqJTtmWd7F 00:22:00.100 08:17:21 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BqJTtmWd7F 00:22:00.100 08:17:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BqJTtmWd7F 00:22:00.359 08:17:22 keyring_file -- keyring/file.sh@86 -- # rm -f 
/tmp/tmp.BqJTtmWd7F 00:22:00.359 08:17:22 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:22:00.359 08:17:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:00.359 08:17:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:00.359 08:17:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:00.359 08:17:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:00.359 08:17:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:00.617 08:17:22 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:22:00.617 08:17:22 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:00.617 08:17:22 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:22:00.617 08:17:22 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:00.617 08:17:22 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:22:00.617 08:17:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:00.617 08:17:22 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:22:00.617 08:17:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:00.617 08:17:22 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:00.617 08:17:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:00.876 [2024-06-10 08:17:22.679039] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.BqJTtmWd7F': No such file or directory 00:22:00.876 [2024-06-10 08:17:22.679098] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:00.876 [2024-06-10 08:17:22.679125] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:00.876 [2024-06-10 08:17:22.679135] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:00.876 [2024-06-10 08:17:22.679144] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:00.876 request: 00:22:00.876 { 00:22:00.876 "name": "nvme0", 00:22:00.876 "trtype": "tcp", 00:22:00.876 "traddr": "127.0.0.1", 00:22:00.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:00.876 "adrfam": "ipv4", 00:22:00.876 "trsvcid": "4420", 00:22:00.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:00.876 "psk": "key0", 00:22:00.876 "method": "bdev_nvme_attach_controller", 00:22:00.876 "req_id": 1 00:22:00.876 } 00:22:00.876 Got JSON-RPC error response 00:22:00.876 response: 00:22:00.876 { 00:22:00.876 "code": -19, 00:22:00.876 "message": "No such device" 00:22:00.876 } 00:22:00.876 08:17:22 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:22:00.876 08:17:22 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 
)) 00:22:00.876 08:17:22 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:00.876 08:17:22 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:00.876 08:17:22 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:22:00.876 08:17:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:01.134 08:17:22 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:01.134 08:17:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:01.134 08:17:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:01.134 08:17:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:01.134 08:17:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:01.134 08:17:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:01.134 08:17:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XolKUF4kMI 00:22:01.134 08:17:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:01.135 08:17:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:01.135 08:17:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:01.135 08:17:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:01.135 08:17:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:01.135 08:17:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:01.135 08:17:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:01.478 08:17:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XolKUF4kMI 00:22:01.478 08:17:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XolKUF4kMI 00:22:01.478 08:17:23 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.XolKUF4kMI 00:22:01.478 08:17:23 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XolKUF4kMI 00:22:01.478 08:17:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XolKUF4kMI 00:22:01.478 08:17:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:01.478 08:17:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:01.736 nvme0n1 00:22:01.736 08:17:23 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:22:01.736 08:17:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:01.736 08:17:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:01.736 08:17:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:01.736 08:17:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:01.736 08:17:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:02.306 08:17:23 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:22:02.306 08:17:23 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:22:02.306 08:17:23 
keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:02.306 08:17:24 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:22:02.306 08:17:24 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:22:02.306 08:17:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:02.306 08:17:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:02.306 08:17:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:02.565 08:17:24 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:22:02.565 08:17:24 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:22:02.565 08:17:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:02.565 08:17:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:02.565 08:17:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:02.565 08:17:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:02.565 08:17:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:02.824 08:17:24 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:22:02.824 08:17:24 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:02.824 08:17:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:03.082 08:17:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:22:03.082 08:17:24 keyring_file -- keyring/file.sh@104 -- # jq length 00:22:03.082 08:17:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.341 08:17:25 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:22:03.341 08:17:25 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XolKUF4kMI 00:22:03.341 08:17:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XolKUF4kMI 00:22:03.600 08:17:25 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.He87MTXhiI 00:22:03.600 08:17:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.He87MTXhiI 00:22:03.859 08:17:25 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:03.859 08:17:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:04.119 nvme0n1 00:22:04.119 08:17:25 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:22:04.119 08:17:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:04.687 08:17:26 keyring_file -- keyring/file.sh@112 -- # config='{ 00:22:04.687 "subsystems": [ 00:22:04.687 { 00:22:04.687 "subsystem": "keyring", 00:22:04.687 "config": [ 00:22:04.687 { 00:22:04.687 
"method": "keyring_file_add_key", 00:22:04.687 "params": { 00:22:04.687 "name": "key0", 00:22:04.687 "path": "/tmp/tmp.XolKUF4kMI" 00:22:04.687 } 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "method": "keyring_file_add_key", 00:22:04.687 "params": { 00:22:04.687 "name": "key1", 00:22:04.687 "path": "/tmp/tmp.He87MTXhiI" 00:22:04.687 } 00:22:04.687 } 00:22:04.687 ] 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "subsystem": "iobuf", 00:22:04.687 "config": [ 00:22:04.687 { 00:22:04.687 "method": "iobuf_set_options", 00:22:04.687 "params": { 00:22:04.687 "small_pool_count": 8192, 00:22:04.687 "large_pool_count": 1024, 00:22:04.687 "small_bufsize": 8192, 00:22:04.687 "large_bufsize": 135168 00:22:04.687 } 00:22:04.687 } 00:22:04.687 ] 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "subsystem": "sock", 00:22:04.687 "config": [ 00:22:04.687 { 00:22:04.687 "method": "sock_set_default_impl", 00:22:04.687 "params": { 00:22:04.687 "impl_name": "uring" 00:22:04.687 } 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "method": "sock_impl_set_options", 00:22:04.687 "params": { 00:22:04.687 "impl_name": "ssl", 00:22:04.687 "recv_buf_size": 4096, 00:22:04.687 "send_buf_size": 4096, 00:22:04.687 "enable_recv_pipe": true, 00:22:04.687 "enable_quickack": false, 00:22:04.687 "enable_placement_id": 0, 00:22:04.687 "enable_zerocopy_send_server": true, 00:22:04.687 "enable_zerocopy_send_client": false, 00:22:04.687 "zerocopy_threshold": 0, 00:22:04.687 "tls_version": 0, 00:22:04.687 "enable_ktls": false 00:22:04.687 } 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "method": "sock_impl_set_options", 00:22:04.687 "params": { 00:22:04.687 "impl_name": "posix", 00:22:04.687 "recv_buf_size": 2097152, 00:22:04.687 "send_buf_size": 2097152, 00:22:04.687 "enable_recv_pipe": true, 00:22:04.687 "enable_quickack": false, 00:22:04.687 "enable_placement_id": 0, 00:22:04.687 "enable_zerocopy_send_server": true, 00:22:04.687 "enable_zerocopy_send_client": false, 00:22:04.687 "zerocopy_threshold": 0, 00:22:04.687 "tls_version": 0, 00:22:04.687 "enable_ktls": false 00:22:04.687 } 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "method": "sock_impl_set_options", 00:22:04.687 "params": { 00:22:04.687 "impl_name": "uring", 00:22:04.687 "recv_buf_size": 2097152, 00:22:04.687 "send_buf_size": 2097152, 00:22:04.687 "enable_recv_pipe": true, 00:22:04.687 "enable_quickack": false, 00:22:04.687 "enable_placement_id": 0, 00:22:04.687 "enable_zerocopy_send_server": false, 00:22:04.687 "enable_zerocopy_send_client": false, 00:22:04.687 "zerocopy_threshold": 0, 00:22:04.687 "tls_version": 0, 00:22:04.687 "enable_ktls": false 00:22:04.687 } 00:22:04.687 } 00:22:04.687 ] 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "subsystem": "vmd", 00:22:04.687 "config": [] 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "subsystem": "accel", 00:22:04.687 "config": [ 00:22:04.687 { 00:22:04.687 "method": "accel_set_options", 00:22:04.687 "params": { 00:22:04.687 "small_cache_size": 128, 00:22:04.687 "large_cache_size": 16, 00:22:04.687 "task_count": 2048, 00:22:04.687 "sequence_count": 2048, 00:22:04.687 "buf_count": 2048 00:22:04.687 } 00:22:04.687 } 00:22:04.687 ] 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "subsystem": "bdev", 00:22:04.687 "config": [ 00:22:04.687 { 00:22:04.687 "method": "bdev_set_options", 00:22:04.687 "params": { 00:22:04.687 "bdev_io_pool_size": 65535, 00:22:04.687 "bdev_io_cache_size": 256, 00:22:04.687 "bdev_auto_examine": true, 00:22:04.687 "iobuf_small_cache_size": 128, 00:22:04.687 "iobuf_large_cache_size": 16 00:22:04.687 } 00:22:04.687 }, 
00:22:04.687 { 00:22:04.687 "method": "bdev_raid_set_options", 00:22:04.687 "params": { 00:22:04.687 "process_window_size_kb": 1024 00:22:04.687 } 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "method": "bdev_iscsi_set_options", 00:22:04.687 "params": { 00:22:04.687 "timeout_sec": 30 00:22:04.687 } 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "method": "bdev_nvme_set_options", 00:22:04.687 "params": { 00:22:04.687 "action_on_timeout": "none", 00:22:04.687 "timeout_us": 0, 00:22:04.687 "timeout_admin_us": 0, 00:22:04.687 "keep_alive_timeout_ms": 10000, 00:22:04.687 "arbitration_burst": 0, 00:22:04.687 "low_priority_weight": 0, 00:22:04.687 "medium_priority_weight": 0, 00:22:04.687 "high_priority_weight": 0, 00:22:04.687 "nvme_adminq_poll_period_us": 10000, 00:22:04.687 "nvme_ioq_poll_period_us": 0, 00:22:04.687 "io_queue_requests": 512, 00:22:04.687 "delay_cmd_submit": true, 00:22:04.687 "transport_retry_count": 4, 00:22:04.687 "bdev_retry_count": 3, 00:22:04.687 "transport_ack_timeout": 0, 00:22:04.687 "ctrlr_loss_timeout_sec": 0, 00:22:04.687 "reconnect_delay_sec": 0, 00:22:04.687 "fast_io_fail_timeout_sec": 0, 00:22:04.687 "disable_auto_failback": false, 00:22:04.687 "generate_uuids": false, 00:22:04.687 "transport_tos": 0, 00:22:04.687 "nvme_error_stat": false, 00:22:04.687 "rdma_srq_size": 0, 00:22:04.687 "io_path_stat": false, 00:22:04.687 "allow_accel_sequence": false, 00:22:04.687 "rdma_max_cq_size": 0, 00:22:04.687 "rdma_cm_event_timeout_ms": 0, 00:22:04.687 "dhchap_digests": [ 00:22:04.687 "sha256", 00:22:04.687 "sha384", 00:22:04.687 "sha512" 00:22:04.687 ], 00:22:04.687 "dhchap_dhgroups": [ 00:22:04.687 "null", 00:22:04.687 "ffdhe2048", 00:22:04.687 "ffdhe3072", 00:22:04.687 "ffdhe4096", 00:22:04.687 "ffdhe6144", 00:22:04.687 "ffdhe8192" 00:22:04.687 ] 00:22:04.687 } 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "method": "bdev_nvme_attach_controller", 00:22:04.687 "params": { 00:22:04.687 "name": "nvme0", 00:22:04.687 "trtype": "TCP", 00:22:04.687 "adrfam": "IPv4", 00:22:04.687 "traddr": "127.0.0.1", 00:22:04.687 "trsvcid": "4420", 00:22:04.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.687 "prchk_reftag": false, 00:22:04.687 "prchk_guard": false, 00:22:04.687 "ctrlr_loss_timeout_sec": 0, 00:22:04.687 "reconnect_delay_sec": 0, 00:22:04.687 "fast_io_fail_timeout_sec": 0, 00:22:04.687 "psk": "key0", 00:22:04.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:04.687 "hdgst": false, 00:22:04.687 "ddgst": false 00:22:04.687 } 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "method": "bdev_nvme_set_hotplug", 00:22:04.687 "params": { 00:22:04.687 "period_us": 100000, 00:22:04.687 "enable": false 00:22:04.687 } 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "method": "bdev_wait_for_examine" 00:22:04.687 } 00:22:04.687 ] 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "subsystem": "nbd", 00:22:04.687 "config": [] 00:22:04.687 } 00:22:04.687 ] 00:22:04.687 }' 00:22:04.687 08:17:26 keyring_file -- keyring/file.sh@114 -- # killprocess 85240 00:22:04.687 08:17:26 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 85240 ']' 00:22:04.687 08:17:26 keyring_file -- common/autotest_common.sh@953 -- # kill -0 85240 00:22:04.687 08:17:26 keyring_file -- common/autotest_common.sh@954 -- # uname 00:22:04.687 08:17:26 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:04.688 08:17:26 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 85240 00:22:04.688 killing process with pid 85240 00:22:04.688 Received shutdown signal, test time was about 1.000000 
seconds 00:22:04.688 00:22:04.688 Latency(us) 00:22:04.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.688 =================================================================================================================== 00:22:04.688 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:04.688 08:17:26 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:04.688 08:17:26 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:04.688 08:17:26 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 85240' 00:22:04.688 08:17:26 keyring_file -- common/autotest_common.sh@968 -- # kill 85240 00:22:04.688 08:17:26 keyring_file -- common/autotest_common.sh@973 -- # wait 85240 00:22:04.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:04.947 08:17:26 keyring_file -- keyring/file.sh@117 -- # bperfpid=85489 00:22:04.947 08:17:26 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85489 /var/tmp/bperf.sock 00:22:04.947 08:17:26 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 85489 ']' 00:22:04.947 08:17:26 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:04.947 08:17:26 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:04.947 08:17:26 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:22:04.947 "subsystems": [ 00:22:04.947 { 00:22:04.947 "subsystem": "keyring", 00:22:04.947 "config": [ 00:22:04.947 { 00:22:04.947 "method": "keyring_file_add_key", 00:22:04.947 "params": { 00:22:04.947 "name": "key0", 00:22:04.947 "path": "/tmp/tmp.XolKUF4kMI" 00:22:04.947 } 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "method": "keyring_file_add_key", 00:22:04.947 "params": { 00:22:04.947 "name": "key1", 00:22:04.947 "path": "/tmp/tmp.He87MTXhiI" 00:22:04.947 } 00:22:04.947 } 00:22:04.947 ] 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "subsystem": "iobuf", 00:22:04.947 "config": [ 00:22:04.947 { 00:22:04.947 "method": "iobuf_set_options", 00:22:04.947 "params": { 00:22:04.947 "small_pool_count": 8192, 00:22:04.947 "large_pool_count": 1024, 00:22:04.947 "small_bufsize": 8192, 00:22:04.947 "large_bufsize": 135168 00:22:04.947 } 00:22:04.947 } 00:22:04.947 ] 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "subsystem": "sock", 00:22:04.947 "config": [ 00:22:04.947 { 00:22:04.947 "method": "sock_set_default_impl", 00:22:04.947 "params": { 00:22:04.947 "impl_name": "uring" 00:22:04.947 } 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "method": "sock_impl_set_options", 00:22:04.947 "params": { 00:22:04.947 "impl_name": "ssl", 00:22:04.947 "recv_buf_size": 4096, 00:22:04.947 "send_buf_size": 4096, 00:22:04.947 "enable_recv_pipe": true, 00:22:04.947 "enable_quickack": false, 00:22:04.947 "enable_placement_id": 0, 00:22:04.947 "enable_zerocopy_send_server": true, 00:22:04.947 "enable_zerocopy_send_client": false, 00:22:04.947 "zerocopy_threshold": 0, 00:22:04.947 "tls_version": 0, 00:22:04.947 "enable_ktls": false 00:22:04.947 } 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "method": "sock_impl_set_options", 00:22:04.947 "params": { 00:22:04.947 "impl_name": "posix", 00:22:04.947 "recv_buf_size": 2097152, 00:22:04.947 "send_buf_size": 2097152, 00:22:04.947 "enable_recv_pipe": true, 00:22:04.947 "enable_quickack": false, 00:22:04.947 "enable_placement_id": 0, 00:22:04.947 "enable_zerocopy_send_server": true, 00:22:04.947 "enable_zerocopy_send_client": false, 00:22:04.947 "zerocopy_threshold": 0, 
00:22:04.947 "tls_version": 0, 00:22:04.947 "enable_ktls": false 00:22:04.947 } 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "method": "sock_impl_set_options", 00:22:04.947 "params": { 00:22:04.947 "impl_name": "uring", 00:22:04.947 "recv_buf_size": 2097152, 00:22:04.947 "send_buf_size": 2097152, 00:22:04.947 "enable_recv_pipe": true, 00:22:04.947 "enable_quickack": false, 00:22:04.947 "enable_placement_id": 0, 00:22:04.947 "enable_zerocopy_send_server": false, 00:22:04.947 "enable_zerocopy_send_client": false, 00:22:04.947 "zerocopy_threshold": 0, 00:22:04.947 "tls_version": 0, 00:22:04.947 "enable_ktls": false 00:22:04.947 } 00:22:04.947 } 00:22:04.947 ] 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "subsystem": "vmd", 00:22:04.947 "config": [] 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "subsystem": "accel", 00:22:04.947 "config": [ 00:22:04.947 { 00:22:04.947 "method": "accel_set_options", 00:22:04.947 "params": { 00:22:04.947 "small_cache_size": 128, 00:22:04.947 "large_cache_size": 16, 00:22:04.947 "task_count": 2048, 00:22:04.947 "sequence_count": 2048, 00:22:04.947 "buf_count": 2048 00:22:04.947 } 00:22:04.947 } 00:22:04.947 ] 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "subsystem": "bdev", 00:22:04.947 "config": [ 00:22:04.947 { 00:22:04.947 "method": "bdev_set_options", 00:22:04.947 "params": { 00:22:04.947 "bdev_io_pool_size": 65535, 00:22:04.947 "bdev_io_cache_size": 256, 00:22:04.947 "bdev_auto_examine": true, 00:22:04.947 "iobuf_small_cache_size": 128, 00:22:04.947 "iobuf_large_cache_size": 16 00:22:04.947 } 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "method": "bdev_raid_set_options", 00:22:04.947 "params": { 00:22:04.947 "process_window_size_kb": 1024 00:22:04.947 } 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "method": "bdev_iscsi_set_options", 00:22:04.947 "params": { 00:22:04.947 "timeout_sec": 30 00:22:04.947 } 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "method": "bdev_nvme_set_options", 00:22:04.947 "params": { 00:22:04.947 "action_on_timeout": "none", 00:22:04.947 "timeout_us": 0, 00:22:04.947 "timeout_admin_us": 0, 00:22:04.947 "keep_alive_timeout_ms": 10000, 00:22:04.947 "arbitration_burst": 0, 00:22:04.947 "low_priority_weight": 0, 00:22:04.947 "medium_priority_weight": 0, 00:22:04.947 "high_priority_weight": 0, 00:22:04.947 "nvme_adminq_poll_period_us": 10000, 00:22:04.947 "nvme_ioq_poll_period_us": 0, 00:22:04.947 "io_queue_requests": 512, 00:22:04.947 "delay_cmd_submit": true, 00:22:04.947 "transport_retry_count": 4, 00:22:04.947 "bdev_retry_count": 3, 00:22:04.947 "transport_ack_timeout": 0, 00:22:04.947 "ctrlr_loss_timeout_sec": 0, 00:22:04.947 "reconnect_delay_sec": 0, 00:22:04.947 "fast_io_fail_timeout_sec": 0, 00:22:04.947 "disable_auto_failback": false, 00:22:04.947 "generate_uuids": false, 00:22:04.947 "transport_tos": 0, 00:22:04.947 "nvme_error_stat": false, 00:22:04.947 "rdma_srq_size": 0, 00:22:04.947 "io_path_stat": false, 00:22:04.947 "allow_accel_sequence": false, 00:22:04.947 "rdma_max_cq_size": 0, 00:22:04.947 "rdma_cm_event_timeout_ms": 0, 00:22:04.947 "dhchap_digests": [ 00:22:04.947 "sha256", 00:22:04.947 "sha384", 00:22:04.947 "sha512" 00:22:04.947 ], 00:22:04.947 "dhchap_dhgroups": [ 00:22:04.947 "null", 00:22:04.947 "ffdhe2048", 00:22:04.947 "ffdhe3072", 00:22:04.947 "ffdhe4096", 00:22:04.947 "ffdhe6144", 00:22:04.947 "ffdhe8192" 00:22:04.947 ] 00:22:04.947 } 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "method": "bdev_nvme_attach_controller", 00:22:04.947 "params": { 00:22:04.947 "name": "nvme0", 00:22:04.947 "trtype": "TCP", 00:22:04.947 
"adrfam": "IPv4", 00:22:04.947 "traddr": "127.0.0.1", 00:22:04.947 "trsvcid": "4420", 00:22:04.947 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.947 "prchk_reftag": false, 00:22:04.947 "prchk_guard": false, 00:22:04.947 "ctrlr_loss_timeout_sec": 0, 00:22:04.947 "reconnect_delay_sec": 0, 00:22:04.947 "fast_io_fail_timeout_sec": 0, 00:22:04.948 "psk": "key0", 00:22:04.948 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:04.948 "hdgst": false, 00:22:04.948 "ddgst": false 00:22:04.948 } 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "method": "bdev_nvme_set_hotplug", 00:22:04.948 "params": { 00:22:04.948 "period_us": 100000, 00:22:04.948 "enable": false 00:22:04.948 } 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "method": "bdev_wait_for_examine" 00:22:04.948 } 00:22:04.948 ] 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "subsystem": "nbd", 00:22:04.948 "config": [] 00:22:04.948 } 00:22:04.948 ] 00:22:04.948 }' 00:22:04.948 08:17:26 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:04.948 08:17:26 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:04.948 08:17:26 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:04.948 08:17:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:04.948 [2024-06-10 08:17:26.598541] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:22:04.948 [2024-06-10 08:17:26.598643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85489 ] 00:22:04.948 [2024-06-10 08:17:26.735230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.207 [2024-06-10 08:17:26.853661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.207 [2024-06-10 08:17:26.998454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:05.207 [2024-06-10 08:17:27.055294] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.774 08:17:27 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:05.774 08:17:27 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:22:05.774 08:17:27 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:22:05.774 08:17:27 keyring_file -- keyring/file.sh@120 -- # jq length 00:22:05.774 08:17:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.033 08:17:27 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:22:06.033 08:17:27 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:22:06.033 08:17:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:06.033 08:17:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:06.033 08:17:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:06.033 08:17:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:06.033 08:17:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.292 08:17:28 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:06.292 
08:17:28 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:22:06.292 08:17:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:06.292 08:17:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:06.292 08:17:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:06.292 08:17:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.292 08:17:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:06.551 08:17:28 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:22:06.551 08:17:28 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:22:06.551 08:17:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:06.551 08:17:28 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:22:06.811 08:17:28 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:22:06.811 08:17:28 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:06.811 08:17:28 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.XolKUF4kMI /tmp/tmp.He87MTXhiI 00:22:06.811 08:17:28 keyring_file -- keyring/file.sh@20 -- # killprocess 85489 00:22:06.811 08:17:28 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 85489 ']' 00:22:06.811 08:17:28 keyring_file -- common/autotest_common.sh@953 -- # kill -0 85489 00:22:06.811 08:17:28 keyring_file -- common/autotest_common.sh@954 -- # uname 00:22:06.811 08:17:28 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:06.811 08:17:28 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 85489 00:22:06.811 killing process with pid 85489 00:22:06.811 Received shutdown signal, test time was about 1.000000 seconds 00:22:06.811 00:22:06.811 Latency(us) 00:22:06.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.811 =================================================================================================================== 00:22:06.811 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:06.811 08:17:28 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:06.811 08:17:28 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:06.811 08:17:28 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 85489' 00:22:06.811 08:17:28 keyring_file -- common/autotest_common.sh@968 -- # kill 85489 00:22:06.811 08:17:28 keyring_file -- common/autotest_common.sh@973 -- # wait 85489 00:22:07.070 08:17:28 keyring_file -- keyring/file.sh@21 -- # killprocess 85223 00:22:07.070 08:17:28 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 85223 ']' 00:22:07.070 08:17:28 keyring_file -- common/autotest_common.sh@953 -- # kill -0 85223 00:22:07.070 08:17:28 keyring_file -- common/autotest_common.sh@954 -- # uname 00:22:07.070 08:17:28 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:07.070 08:17:28 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 85223 00:22:07.330 killing process with pid 85223 00:22:07.330 08:17:28 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:07.330 08:17:28 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:07.330 08:17:28 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process 
with pid 85223' 00:22:07.330 08:17:28 keyring_file -- common/autotest_common.sh@968 -- # kill 85223 00:22:07.330 [2024-06-10 08:17:28.947019] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:07.330 08:17:28 keyring_file -- common/autotest_common.sh@973 -- # wait 85223 00:22:07.589 00:22:07.589 real 0m16.023s 00:22:07.589 user 0m39.697s 00:22:07.589 sys 0m3.195s 00:22:07.589 08:17:29 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:07.589 ************************************ 00:22:07.589 END TEST keyring_file 00:22:07.589 ************************************ 00:22:07.589 08:17:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:07.589 08:17:29 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:22:07.589 08:17:29 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:07.589 08:17:29 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:07.589 08:17:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:07.589 08:17:29 -- common/autotest_common.sh@10 -- # set +x 00:22:07.589 ************************************ 00:22:07.589 START TEST keyring_linux 00:22:07.589 ************************************ 00:22:07.589 08:17:29 keyring_linux -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:07.850 * Looking for test storage... 00:22:07.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:07.850 08:17:29 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=0b063e5e-64f6-4b4f-b15f-bd51b74609ab 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:07.850 08:17:29 keyring_linux -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:22:07.850 08:17:29 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.850 08:17:29 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.850 08:17:29 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.850 08:17:29 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.850 08:17:29 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.850 08:17:29 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:07.850 08:17:29 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:07.850 08:17:29 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:07.850 08:17:29 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:07.850 08:17:29 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:07.850 08:17:29 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:07.850 08:17:29 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:07.850 08:17:29 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 
00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:07.850 08:17:29 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:07.850 /tmp/:spdk-test:key0 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:07.850 08:17:29 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:07.850 08:17:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:07.851 08:17:29 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:07.851 08:17:29 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:07.851 08:17:29 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:07.851 08:17:29 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:07.851 08:17:29 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:07.851 08:17:29 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:07.851 08:17:29 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:07.851 /tmp/:spdk-test:key1 00:22:07.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
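prep_key above builds each PSK file by piping the raw key and digest through an inline `python -` heredoc (format_interchange_psk / format_key) and then tightening permissions with chmod 0600. The sketch below shows one plausible implementation of that step, assuming the layout the resulting NVMeTLSkey-1:00:...: strings suggest — the configured key bytes followed by a 4-byte little-endian CRC32, base64-encoded — and using `python3 -c` instead of the script's heredoc; the exact nvmf/common.sh code may differ.

    # Sketch only: produce an NVMe/TCP interchange PSK string for digest 0 (no hash),
    # e.g. format_interchange_psk 00112233445566778899aabbccddeeff 0
    format_interchange_psk() {
        local key=$1 digest=$2
        python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # the test feeds the hex string itself as the key bytes
    digest = int(sys.argv[2])
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumed trailer: 4-byte little-endian CRC32 of the key
    b64 = base64.b64encode(key + crc).decode()
    print(f"NVMeTLSkey-1:{digest:02x}:{b64}:", end="")
    ' "$key" "$digest"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0   # prep_key always chmods 0600; the earlier keyring_file test shows 0660 being rejected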
00:22:07.851 08:17:29 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:07.851 08:17:29 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85603 00:22:07.851 08:17:29 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:07.851 08:17:29 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85603 00:22:07.851 08:17:29 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 85603 ']' 00:22:07.851 08:17:29 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.851 08:17:29 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:07.851 08:17:29 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.851 08:17:29 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:07.851 08:17:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:07.851 [2024-06-10 08:17:29.685282] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:22:07.851 [2024-06-10 08:17:29.685709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85603 ] 00:22:08.110 [2024-06-10 08:17:29.825505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.110 [2024-06-10 08:17:29.945288] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.370 [2024-06-10 08:17:30.001423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:22:08.939 08:17:30 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:08.939 [2024-06-10 08:17:30.671101] tcp.c: 716:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.939 null0 00:22:08.939 [2024-06-10 08:17:30.703074] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:08.939 [2024-06-10 08:17:30.703344] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:08.939 08:17:30 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:08.939 3197339 00:22:08.939 08:17:30 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:08.939 89687853 00:22:08.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
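Both daemons in this test (spdk_tgt above, bdevperf below) are launched in the background and gated on waitforlisten against their UNIX-domain RPC sockets (/var/tmp/spdk.sock, /var/tmp/bperf.sock). A rough sketch of that gating pattern follows; the function name, socket path, and max_retries=100 come from the trace, while the polling details — probing with rpc.py's generic rpc_get_methods call and a fixed sleep — are assumptions, not the verbatim common/autotest_common.sh helper.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    waitforlisten() {    # waitforlisten <pid> [rpc_socket]
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1                      # app died before it ever listened
            if "$rpc_py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                                 # socket is up and answering JSON-RPC
            fi
            sleep 0.5
        done
        return 1
    }

    # Usage, as in the trace:
    #   /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt & tgtpid=$!
    #   waitforlisten "$tgtpid"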
00:22:08.939 08:17:30 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85621 00:22:08.939 08:17:30 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:08.939 08:17:30 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85621 /var/tmp/bperf.sock 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 85621 ']' 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:08.939 08:17:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:08.939 [2024-06-10 08:17:30.789423] Starting SPDK v24.09-pre git sha1 3a44739b7 / DPDK 24.03.0 initialization... 00:22:08.939 [2024-06-10 08:17:30.789916] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85621 ] 00:22:09.199 [2024-06-10 08:17:30.932056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.199 [2024-06-10 08:17:31.060572] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.137 08:17:31 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:10.137 08:17:31 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:22:10.137 08:17:31 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:10.137 08:17:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:10.396 08:17:32 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:10.396 08:17:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:10.655 [2024-06-10 08:17:32.319980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:10.655 08:17:32 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:10.655 08:17:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:10.915 [2024-06-10 08:17:32.603776] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.915 nvme0n1 00:22:10.915 08:17:32 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:10.915 08:17:32 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:10.915 08:17:32 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:10.915 08:17:32 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:10.915 08:17:32 keyring_linux -- keyring/linux.sh@22 -- # jq length 
00:22:10.915 08:17:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.174 08:17:32 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:11.174 08:17:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:11.174 08:17:32 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:11.174 08:17:32 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:11.174 08:17:32 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:11.174 08:17:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.174 08:17:32 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:11.434 08:17:33 keyring_linux -- keyring/linux.sh@25 -- # sn=3197339 00:22:11.434 08:17:33 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:11.434 08:17:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:11.434 08:17:33 keyring_linux -- keyring/linux.sh@26 -- # [[ 3197339 == \3\1\9\7\3\3\9 ]] 00:22:11.434 08:17:33 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 3197339 00:22:11.434 08:17:33 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:11.434 08:17:33 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:11.694 Running I/O for 1 seconds... 00:22:12.631 00:22:12.631 Latency(us) 00:22:12.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.631 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:12.631 nvme0n1 : 1.01 13159.88 51.41 0.00 0.00 9670.50 8162.21 18588.39 00:22:12.631 =================================================================================================================== 00:22:12.631 Total : 13159.88 51.41 0.00 0.00 9670.50 8162.21 18588.39 00:22:12.631 0 00:22:12.631 08:17:34 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:12.631 08:17:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:12.891 08:17:34 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:12.891 08:17:34 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:12.891 08:17:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:12.891 08:17:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:12.891 08:17:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:12.891 08:17:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:13.150 08:17:34 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:13.150 08:17:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:13.150 08:17:34 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:13.150 08:17:34 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:13.150 
08:17:34 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:22:13.150 08:17:34 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:13.150 08:17:34 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:22:13.150 08:17:34 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:13.150 08:17:34 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:22:13.150 08:17:34 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:13.150 08:17:34 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:13.150 08:17:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:13.410 [2024-06-10 08:17:35.156801] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:13.410 [2024-06-10 08:17:35.157457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc23a0 (107): Transport endpoint is not connected 00:22:13.410 [2024-06-10 08:17:35.158445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc23a0 (9): Bad file descriptor 00:22:13.410 [2024-06-10 08:17:35.159442] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.410 [2024-06-10 08:17:35.159464] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:13.410 [2024-06-10 08:17:35.159475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:22:13.410 request: 00:22:13.410 { 00:22:13.410 "name": "nvme0", 00:22:13.410 "trtype": "tcp", 00:22:13.410 "traddr": "127.0.0.1", 00:22:13.410 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:13.410 "adrfam": "ipv4", 00:22:13.410 "trsvcid": "4420", 00:22:13.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.410 "psk": ":spdk-test:key1", 00:22:13.410 "method": "bdev_nvme_attach_controller", 00:22:13.410 "req_id": 1 00:22:13.410 } 00:22:13.410 Got JSON-RPC error response 00:22:13.410 response: 00:22:13.410 { 00:22:13.410 "code": -5, 00:22:13.410 "message": "Input/output error" 00:22:13.410 } 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@33 -- # sn=3197339 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 3197339 00:22:13.410 1 links removed 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@33 -- # sn=89687853 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 89687853 00:22:13.410 1 links removed 00:22:13.410 08:17:35 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85621 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 85621 ']' 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 85621 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 85621 00:22:13.410 killing process with pid 85621 00:22:13.410 Received shutdown signal, test time was about 1.000000 seconds 00:22:13.410 00:22:13.410 Latency(us) 00:22:13.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.410 =================================================================================================================== 00:22:13.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 85621' 
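The cleanup traced above ("1 links removed", twice) resolves each key name back to its serial number and drops it from the session keyring before the processes are killed. The equivalent stand-alone loop, again plain keyctl usage mirroring what unlink_key does in the trace rather than SPDK code:

for name in ':spdk-test:key0' ':spdk-test:key1'; do
    sn=$(keyctl search @s user "$name") || continue   # serial number, e.g. 3197339 / 89687853 above
    keyctl unlink "$sn"                               # prints "1 links removed" as in the log
done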
00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@968 -- # kill 85621 00:22:13.410 08:17:35 keyring_linux -- common/autotest_common.sh@973 -- # wait 85621 00:22:13.669 08:17:35 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85603 00:22:13.669 08:17:35 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 85603 ']' 00:22:13.669 08:17:35 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 85603 00:22:13.669 08:17:35 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:22:13.669 08:17:35 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:13.669 08:17:35 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 85603 00:22:13.669 killing process with pid 85603 00:22:13.669 08:17:35 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:13.669 08:17:35 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:13.669 08:17:35 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 85603' 00:22:13.669 08:17:35 keyring_linux -- common/autotest_common.sh@968 -- # kill 85603 00:22:13.669 08:17:35 keyring_linux -- common/autotest_common.sh@973 -- # wait 85603 00:22:14.238 ************************************ 00:22:14.238 END TEST keyring_linux 00:22:14.238 ************************************ 00:22:14.238 00:22:14.238 real 0m6.500s 00:22:14.238 user 0m12.662s 00:22:14.238 sys 0m1.584s 00:22:14.238 08:17:35 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:14.238 08:17:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:14.238 08:17:35 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:14.238 08:17:35 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:22:14.238 08:17:35 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:14.238 08:17:35 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:14.238 08:17:35 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:14.238 08:17:35 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:22:14.238 08:17:35 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:22:14.238 08:17:35 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:14.238 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:22:14.238 08:17:35 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:22:14.238 08:17:35 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:22:14.238 08:17:35 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:22:14.238 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:22:16.206 INFO: APP EXITING 00:22:16.206 INFO: killing all VMs 00:22:16.206 INFO: killing vhost app 00:22:16.206 INFO: EXIT DONE 00:22:16.775 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:16.775 0000:00:11.0 (1b36 0010): Already using the nvme driver 
00:22:16.775 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:17.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:17.343 Cleaning 00:22:17.343 Removing: /var/run/dpdk/spdk0/config 00:22:17.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:17.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:17.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:17.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:17.343 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:17.343 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:17.343 Removing: /var/run/dpdk/spdk1/config 00:22:17.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:17.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:17.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:17.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:17.343 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:17.602 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:17.602 Removing: /var/run/dpdk/spdk2/config 00:22:17.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:17.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:17.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:17.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:17.602 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:17.602 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:17.602 Removing: /var/run/dpdk/spdk3/config 00:22:17.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:17.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:17.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:17.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:17.602 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:17.602 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:17.602 Removing: /var/run/dpdk/spdk4/config 00:22:17.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:17.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:17.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:17.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:17.602 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:17.602 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:17.602 Removing: /dev/shm/nvmf_trace.0 00:22:17.602 Removing: /dev/shm/spdk_tgt_trace.pid58635 00:22:17.602 Removing: /var/run/dpdk/spdk0 00:22:17.602 Removing: /var/run/dpdk/spdk1 00:22:17.602 Removing: /var/run/dpdk/spdk2 00:22:17.602 Removing: /var/run/dpdk/spdk3 00:22:17.602 Removing: /var/run/dpdk/spdk4 00:22:17.602 Removing: /var/run/dpdk/spdk_pid58485 00:22:17.602 Removing: /var/run/dpdk/spdk_pid58635 00:22:17.602 Removing: /var/run/dpdk/spdk_pid58833 00:22:17.602 Removing: /var/run/dpdk/spdk_pid58920 00:22:17.602 Removing: /var/run/dpdk/spdk_pid58953 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59057 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59075 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59204 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59389 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59535 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59606 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59682 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59773 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59850 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59889 00:22:17.602 Removing: /var/run/dpdk/spdk_pid59924 00:22:17.602 
Removing: /var/run/dpdk/spdk_pid59986 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60085 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60528 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60570 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60627 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60643 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60715 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60731 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60798 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60814 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60865 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60883 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60923 00:22:17.602 Removing: /var/run/dpdk/spdk_pid60941 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61069 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61105 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61179 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61231 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61255 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61314 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61354 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61383 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61423 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61456 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61492 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61532 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61561 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61601 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61630 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61670 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61701 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61741 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61776 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61810 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61848 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61882 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61920 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61963 00:22:17.602 Removing: /var/run/dpdk/spdk_pid61996 00:22:17.602 Removing: /var/run/dpdk/spdk_pid62033 00:22:17.602 Removing: /var/run/dpdk/spdk_pid62103 00:22:17.602 Removing: /var/run/dpdk/spdk_pid62190 00:22:17.602 Removing: /var/run/dpdk/spdk_pid62498 00:22:17.602 Removing: /var/run/dpdk/spdk_pid62516 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62547 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62566 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62576 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62606 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62614 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62635 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62654 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62673 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62683 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62708 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62721 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62742 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62761 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62776 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62790 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62817 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62830 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62846 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62882 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62895 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62929 00:22:17.861 Removing: /var/run/dpdk/spdk_pid62989 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63017 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63027 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63061 00:22:17.861 Removing: 
/var/run/dpdk/spdk_pid63070 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63078 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63126 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63134 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63168 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63177 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63187 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63202 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63206 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63221 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63230 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63240 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63274 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63299 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63310 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63344 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63348 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63361 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63406 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63413 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63445 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63453 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63460 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63473 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63480 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63488 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63501 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63503 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63577 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63630 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63739 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63774 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63819 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63839 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63855 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63870 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63908 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63929 00:22:17.861 Removing: /var/run/dpdk/spdk_pid63999 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64015 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64059 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64128 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64191 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64232 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64323 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64366 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64404 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64621 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64721 00:22:17.861 Removing: /var/run/dpdk/spdk_pid64744 00:22:17.861 Removing: /var/run/dpdk/spdk_pid65064 00:22:17.861 Removing: /var/run/dpdk/spdk_pid65103 00:22:17.861 Removing: /var/run/dpdk/spdk_pid65397 00:22:17.861 Removing: /var/run/dpdk/spdk_pid65809 00:22:17.861 Removing: /var/run/dpdk/spdk_pid66078 00:22:17.861 Removing: /var/run/dpdk/spdk_pid66863 00:22:17.861 Removing: /var/run/dpdk/spdk_pid67678 00:22:17.861 Removing: /var/run/dpdk/spdk_pid67800 00:22:17.861 Removing: /var/run/dpdk/spdk_pid67868 00:22:17.861 Removing: /var/run/dpdk/spdk_pid69139 00:22:17.861 Removing: /var/run/dpdk/spdk_pid69344 00:22:17.861 Removing: /var/run/dpdk/spdk_pid72678 00:22:17.861 Removing: /var/run/dpdk/spdk_pid72988 00:22:18.120 Removing: /var/run/dpdk/spdk_pid73097 00:22:18.120 Removing: /var/run/dpdk/spdk_pid73231 00:22:18.120 Removing: /var/run/dpdk/spdk_pid73264 00:22:18.120 Removing: /var/run/dpdk/spdk_pid73286 00:22:18.120 Removing: /var/run/dpdk/spdk_pid73314 00:22:18.120 Removing: /var/run/dpdk/spdk_pid73407 00:22:18.120 Removing: /var/run/dpdk/spdk_pid73536 
00:22:18.120 Removing: /var/run/dpdk/spdk_pid73686 00:22:18.120 Removing: /var/run/dpdk/spdk_pid73767 00:22:18.120 Removing: /var/run/dpdk/spdk_pid73961 00:22:18.120 Removing: /var/run/dpdk/spdk_pid74044 00:22:18.120 Removing: /var/run/dpdk/spdk_pid74137 00:22:18.120 Removing: /var/run/dpdk/spdk_pid74442 00:22:18.120 Removing: /var/run/dpdk/spdk_pid74826 00:22:18.120 Removing: /var/run/dpdk/spdk_pid74829 00:22:18.120 Removing: /var/run/dpdk/spdk_pid75104 00:22:18.120 Removing: /var/run/dpdk/spdk_pid75118 00:22:18.120 Removing: /var/run/dpdk/spdk_pid75132 00:22:18.120 Removing: /var/run/dpdk/spdk_pid75163 00:22:18.120 Removing: /var/run/dpdk/spdk_pid75168 00:22:18.120 Removing: /var/run/dpdk/spdk_pid75470 00:22:18.120 Removing: /var/run/dpdk/spdk_pid75514 00:22:18.120 Removing: /var/run/dpdk/spdk_pid75797 00:22:18.120 Removing: /var/run/dpdk/spdk_pid75994 00:22:18.120 Removing: /var/run/dpdk/spdk_pid76379 00:22:18.120 Removing: /var/run/dpdk/spdk_pid76882 00:22:18.120 Removing: /var/run/dpdk/spdk_pid77697 00:22:18.120 Removing: /var/run/dpdk/spdk_pid78282 00:22:18.120 Removing: /var/run/dpdk/spdk_pid78284 00:22:18.120 Removing: /var/run/dpdk/spdk_pid80170 00:22:18.120 Removing: /var/run/dpdk/spdk_pid80236 00:22:18.120 Removing: /var/run/dpdk/spdk_pid80291 00:22:18.120 Removing: /var/run/dpdk/spdk_pid80353 00:22:18.120 Removing: /var/run/dpdk/spdk_pid80474 00:22:18.120 Removing: /var/run/dpdk/spdk_pid80529 00:22:18.120 Removing: /var/run/dpdk/spdk_pid80589 00:22:18.120 Removing: /var/run/dpdk/spdk_pid80644 00:22:18.120 Removing: /var/run/dpdk/spdk_pid80966 00:22:18.120 Removing: /var/run/dpdk/spdk_pid82126 00:22:18.120 Removing: /var/run/dpdk/spdk_pid82266 00:22:18.120 Removing: /var/run/dpdk/spdk_pid82509 00:22:18.120 Removing: /var/run/dpdk/spdk_pid83060 00:22:18.120 Removing: /var/run/dpdk/spdk_pid83219 00:22:18.120 Removing: /var/run/dpdk/spdk_pid83375 00:22:18.120 Removing: /var/run/dpdk/spdk_pid83468 00:22:18.120 Removing: /var/run/dpdk/spdk_pid83642 00:22:18.120 Removing: /var/run/dpdk/spdk_pid83751 00:22:18.120 Removing: /var/run/dpdk/spdk_pid84412 00:22:18.120 Removing: /var/run/dpdk/spdk_pid84443 00:22:18.120 Removing: /var/run/dpdk/spdk_pid84478 00:22:18.120 Removing: /var/run/dpdk/spdk_pid84731 00:22:18.120 Removing: /var/run/dpdk/spdk_pid84765 00:22:18.120 Removing: /var/run/dpdk/spdk_pid84799 00:22:18.120 Removing: /var/run/dpdk/spdk_pid85223 00:22:18.120 Removing: /var/run/dpdk/spdk_pid85240 00:22:18.120 Removing: /var/run/dpdk/spdk_pid85489 00:22:18.120 Removing: /var/run/dpdk/spdk_pid85603 00:22:18.120 Removing: /var/run/dpdk/spdk_pid85621 00:22:18.120 Clean 00:22:18.120 08:17:39 -- common/autotest_common.sh@1450 -- # return 0 00:22:18.120 08:17:39 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:22:18.120 08:17:39 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:18.120 08:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:18.378 08:17:39 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:22:18.378 08:17:39 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:18.379 08:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:18.379 08:17:40 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:18.379 08:17:40 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:18.379 08:17:40 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:18.379 08:17:40 -- spdk/autotest.sh@391 -- # hash lcov 00:22:18.379 08:17:40 -- spdk/autotest.sh@391 -- 
# [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:18.379 08:17:40 -- spdk/autotest.sh@393 -- # hostname 00:22:18.379 08:17:40 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:18.637 geninfo: WARNING: invalid characters removed from testname! 00:22:45.175 08:18:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:48.460 08:18:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:50.990 08:18:12 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:53.519 08:18:14 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:56.052 08:18:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:58.584 08:18:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:01.116 08:18:22 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:01.116 08:18:22 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:01.116 08:18:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:01.116 08:18:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.116 08:18:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.116 08:18:22 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.116 08:18:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.116 08:18:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.116 08:18:22 -- paths/export.sh@5 -- $ export PATH 00:23:01.116 08:18:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.116 08:18:22 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:01.116 08:18:22 -- common/autobuild_common.sh@437 -- $ date +%s 00:23:01.116 08:18:22 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718007502.XXXXXX 00:23:01.116 08:18:22 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718007502.WTqGdY 00:23:01.116 08:18:22 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:23:01.116 08:18:22 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:23:01.116 08:18:22 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:01.116 08:18:22 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:01.116 08:18:22 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:01.116 08:18:22 -- common/autobuild_common.sh@453 -- $ get_config_params 00:23:01.116 08:18:22 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:23:01.116 08:18:22 -- common/autotest_common.sh@10 -- $ set +x 00:23:01.116 08:18:22 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:23:01.116 08:18:22 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:23:01.116 08:18:22 -- pm/common@17 -- $ local monitor 00:23:01.116 08:18:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:01.116 08:18:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:01.116 
08:18:22 -- pm/common@25 -- $ sleep 1 00:23:01.116 08:18:22 -- pm/common@21 -- $ date +%s 00:23:01.116 08:18:22 -- pm/common@21 -- $ date +%s 00:23:01.116 08:18:22 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1718007502 00:23:01.116 08:18:22 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1718007502 00:23:01.116 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1718007502_collect-vmstat.pm.log 00:23:01.116 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1718007502_collect-cpu-load.pm.log 00:23:02.052 08:18:23 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:23:02.052 08:18:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:23:02.052 08:18:23 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:23:02.052 08:18:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:23:02.052 08:18:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:23:02.052 08:18:23 -- spdk/autopackage.sh@19 -- $ timing_finish 00:23:02.052 08:18:23 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:02.052 08:18:23 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:23:02.052 08:18:23 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:02.311 08:18:23 -- spdk/autopackage.sh@20 -- $ exit 0 00:23:02.311 08:18:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:02.311 08:18:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:02.311 08:18:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:02.311 08:18:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:02.311 08:18:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:02.311 08:18:23 -- pm/common@44 -- $ pid=87372 00:23:02.311 08:18:23 -- pm/common@50 -- $ kill -TERM 87372 00:23:02.311 08:18:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:02.311 08:18:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:02.311 08:18:23 -- pm/common@44 -- $ pid=87373 00:23:02.311 08:18:23 -- pm/common@50 -- $ kill -TERM 87373 00:23:02.311 + [[ -n 5104 ]] 00:23:02.311 + sudo kill 5104 00:23:02.322 [Pipeline] } 00:23:02.346 [Pipeline] // timeout 00:23:02.352 [Pipeline] } 00:23:02.371 [Pipeline] // stage 00:23:02.376 [Pipeline] } 00:23:02.395 [Pipeline] // catchError 00:23:02.405 [Pipeline] stage 00:23:02.407 [Pipeline] { (Stop VM) 00:23:02.419 [Pipeline] sh 00:23:02.698 + vagrant halt 00:23:06.885 ==> default: Halting domain... 00:23:12.164 [Pipeline] sh 00:23:12.443 + vagrant destroy -f 00:23:16.631 ==> default: Removing domain... 
00:23:16.645 [Pipeline] sh 00:23:16.926 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:16.936 [Pipeline] } 00:23:16.955 [Pipeline] // stage 00:23:16.961 [Pipeline] } 00:23:16.980 [Pipeline] // dir 00:23:16.988 [Pipeline] } 00:23:17.006 [Pipeline] // wrap 00:23:17.012 [Pipeline] } 00:23:17.029 [Pipeline] // catchError 00:23:17.040 [Pipeline] stage 00:23:17.043 [Pipeline] { (Epilogue) 00:23:17.061 [Pipeline] sh 00:23:17.341 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:23.927 [Pipeline] catchError 00:23:23.929 [Pipeline] { 00:23:23.942 [Pipeline] sh 00:23:24.221 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:24.221 Artifacts sizes are good 00:23:24.229 [Pipeline] } 00:23:24.244 [Pipeline] // catchError 00:23:24.253 [Pipeline] archiveArtifacts 00:23:24.259 Archiving artifacts 00:23:24.427 [Pipeline] cleanWs 00:23:24.438 [WS-CLEANUP] Deleting project workspace... 00:23:24.438 [WS-CLEANUP] Deferred wipeout is used... 00:23:24.444 [WS-CLEANUP] done 00:23:24.446 [Pipeline] } 00:23:24.462 [Pipeline] // stage 00:23:24.467 [Pipeline] } 00:23:24.483 [Pipeline] // node 00:23:24.489 [Pipeline] End of Pipeline 00:23:24.543 Finished: SUCCESS